I0123 23:39:15.570976 8 test_context.go:416] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0123 23:39:15.571480 8 e2e.go:109] Starting e2e run "5f7def5b-5066-4eb6-93e4-ded65b2168a6" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1579822754 - Will randomize all specs
Will run 278 of 4841 specs

Jan 23 23:39:15.618: INFO: >>> kubeConfig: /root/.kube/config
Jan 23 23:39:15.622: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 23 23:39:15.654: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 23 23:39:15.690: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 23 23:39:15.690: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 23 23:39:15.690: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 23 23:39:15.703: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 23 23:39:15.703: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 23 23:39:15.703: INFO: e2e test version: v1.18.0-alpha.1.106+4f70231ce7736c
Jan 23 23:39:15.705: INFO: kube-apiserver version: v1.17.0
Jan 23 23:39:15.705: INFO: >>> kubeConfig: /root/.kube/config
Jan 23 23:39:15.711: INFO: Cluster IP family: ipv4
SSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 23 23:39:15.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
Jan 23 23:39:15.821: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 23 23:39:35.926: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1845 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 23:39:35.926: INFO: >>> kubeConfig: /root/.kube/config
I0123 23:39:35.994852 8 log.go:172] (0xc002ccf290) (0xc000c6f5e0) Create stream
I0123 23:39:35.994920 8 log.go:172] (0xc002ccf290) (0xc000c6f5e0) Stream added, broadcasting: 1
I0123 23:39:36.000538 8 log.go:172] (0xc002ccf290) Reply frame received for 1
I0123 23:39:36.000640 8 log.go:172] (0xc002ccf290) (0xc000d086e0) Create stream
I0123 23:39:36.000652 8 log.go:172] (0xc002ccf290) (0xc000d086e0) Stream added, broadcasting: 3
I0123 23:39:36.002053 8 log.go:172] (0xc002ccf290) Reply frame received for 3
I0123 23:39:36.002074 8 log.go:172] (0xc002ccf290) (0xc000cc52c0) Create stream
I0123 23:39:36.002082 8 log.go:172] (0xc002ccf290) (0xc000cc52c0) Stream added, broadcasting: 5
I0123 23:39:36.003685 8 log.go:172] (0xc002ccf290) Reply frame received for 5
I0123 23:39:36.069432 8 log.go:172] (0xc002ccf290) Data frame received for 3
I0123 23:39:36.069534 8 log.go:172] (0xc000d086e0) (3) Data frame handling
I0123 23:39:36.069565 8 log.go:172] (0xc000d086e0) (3) Data frame sent
I0123 23:39:36.174611 8 log.go:172] (0xc002ccf290) Data frame received for 1
I0123 23:39:36.174830 8 log.go:172] (0xc002ccf290) (0xc000d086e0) Stream removed, broadcasting: 3
I0123 23:39:36.174887 8 log.go:172] (0xc000c6f5e0) (1) Data frame handling
I0123 23:39:36.174907 8 log.go:172] (0xc000c6f5e0) (1) Data frame sent
I0123 23:39:36.174923 8 log.go:172] (0xc002ccf290) (0xc000cc52c0) Stream removed, broadcasting: 5
I0123 23:39:36.174954 8 log.go:172] (0xc002ccf290) (0xc000c6f5e0) Stream removed, broadcasting: 1
I0123 23:39:36.174966 8 log.go:172] (0xc002ccf290) Go away received
I0123 23:39:36.176705 8 log.go:172] (0xc002ccf290) (0xc000c6f5e0) Stream removed, broadcasting: 1
I0123 23:39:36.176943 8 log.go:172] (0xc002ccf290) (0xc000d086e0) Stream removed, broadcasting: 3
I0123 23:39:36.176967 8 log.go:172] (0xc002ccf290) (0xc000cc52c0) Stream removed, broadcasting: 5
Jan 23 23:39:36.177: INFO: Exec stderr: ""
Jan 23 23:39:36.177: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1845 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 23:39:36.177: INFO: >>> kubeConfig: /root/.kube/config
I0123 23:39:36.223320 8 log.go:172] (0xc002f02370) (0xc000d2a960) Create stream
I0123 23:39:36.223420 8 log.go:172] (0xc002f02370) (0xc000d2a960) Stream added, broadcasting: 1
I0123 23:39:36.226467 8 log.go:172] (0xc002f02370) Reply frame received for 1
I0123 23:39:36.226529 8 log.go:172] (0xc002f02370) (0xc000d2aa00) Create stream
I0123 23:39:36.226539 8 log.go:172] (0xc002f02370) (0xc000d2aa00) Stream added, broadcasting: 3
I0123 23:39:36.228168 8 log.go:172] (0xc002f02370) Reply frame received for 3
I0123 23:39:36.228211 8 log.go:172] (0xc002f02370) (0xc002d57b80) Create stream
I0123 23:39:36.228221 8 log.go:172] (0xc002f02370) (0xc002d57b80) Stream added, broadcasting: 5
I0123 23:39:36.230675 8 log.go:172] (0xc002f02370) Reply frame received for 5
I0123 23:39:36.320411 8 log.go:172] (0xc002f02370) Data frame received for 3
I0123 23:39:36.320521 8 log.go:172] (0xc000d2aa00) (3) Data frame handling
I0123 23:39:36.320544 8 log.go:172] (0xc000d2aa00) (3) Data frame sent
I0123 23:39:36.415321 8 log.go:172] (0xc002f02370) Data frame received for 1
I0123 23:39:36.415498 8 log.go:172] (0xc000d2a960) (1) Data frame handling
I0123 23:39:36.415522 8 log.go:172] (0xc000d2a960) (1) Data frame sent
I0123 23:39:36.415545 8 log.go:172] (0xc002f02370) (0xc000d2a960) Stream removed, broadcasting: 1
I0123 23:39:36.416482 8 log.go:172] (0xc002f02370) (0xc000d2aa00) Stream removed, broadcasting: 3
I0123 23:39:36.416529 8 log.go:172] (0xc002f02370) (0xc002d57b80) Stream removed, broadcasting: 5
I0123 23:39:36.416572 8 log.go:172] (0xc002f02370) Go away received
I0123 23:39:36.416600 8 log.go:172] (0xc002f02370) (0xc000d2a960) Stream removed, broadcasting: 1
I0123 23:39:36.416618 8 log.go:172] (0xc002f02370) (0xc000d2aa00) Stream removed, broadcasting: 3
I0123 23:39:36.416634 8 log.go:172] (0xc002f02370) (0xc002d57b80) Stream removed, broadcasting: 5
Jan 23 23:39:36.416: INFO: Exec stderr: ""
Jan 23 23:39:36.416: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1845 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 23:39:36.416: INFO: >>> kubeConfig: /root/.kube/config
I0123 23:39:36.474096 8 log.go:172] (0xc00260e9a0) (0xc00204c1e0) Create stream
I0123 23:39:36.474231 8 log.go:172] (0xc00260e9a0) (0xc00204c1e0) Stream added, broadcasting: 1
I0123 23:39:36.478590 8 log.go:172] (0xc00260e9a0) Reply frame received for 1
I0123 23:39:36.478812 8 log.go:172] (0xc00260e9a0) (0xc000960140) Create stream
I0123 23:39:36.478856 8 log.go:172] (0xc00260e9a0) (0xc000960140) Stream added, broadcasting: 3
I0123 23:39:36.480553 8 log.go:172] (0xc00260e9a0) Reply frame received for 3
I0123 23:39:36.480624 8 log.go:172] (0xc00260e9a0) (0xc000960280) Create stream
I0123 23:39:36.480629 8 log.go:172] (0xc00260e9a0) (0xc000960280) Stream added, broadcasting: 5
I0123 23:39:36.482265 8 log.go:172] (0xc00260e9a0) Reply frame received for 5
I0123 23:39:36.586824 8 log.go:172] (0xc00260e9a0) Data frame received for 3
I0123 23:39:36.586908 8 log.go:172] (0xc000960140) (3) Data frame handling
I0123 23:39:36.586936 8 log.go:172] (0xc000960140) (3) Data frame sent
I0123 23:39:36.739293 8 log.go:172] (0xc00260e9a0) (0xc000960140) Stream removed, broadcasting: 3
I0123 23:39:36.739520 8 log.go:172] (0xc00260e9a0) Data frame received for 1
I0123 23:39:36.739547 8 log.go:172] (0xc00204c1e0) (1) Data frame handling
I0123 23:39:36.739564 8 log.go:172] (0xc00204c1e0) (1) Data frame sent
I0123 23:39:36.739573 8 log.go:172] (0xc00260e9a0) (0xc00204c1e0) Stream removed, broadcasting: 1
I0123 23:39:36.739758 8 log.go:172] (0xc00260e9a0) (0xc000960280) Stream removed, broadcasting: 5
I0123 23:39:36.739803 8 log.go:172] (0xc00260e9a0) (0xc00204c1e0) Stream removed, broadcasting: 1
I0123 23:39:36.739818 8 log.go:172] (0xc00260e9a0) (0xc000960140) Stream removed, broadcasting: 3
I0123 23:39:36.739829 8 log.go:172] (0xc00260e9a0) (0xc000960280) Stream removed, broadcasting: 5
Jan 23 23:39:36.739: INFO: Exec stderr: ""
I0123 23:39:36.739949 8 log.go:172] (0xc00260e9a0) Go away received
Jan 23 23:39:36.740: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1845 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 23:39:36.740: INFO: >>> kubeConfig: /root/.kube/config
I0123 23:39:36.802731 8 log.go:172] (0xc0026d51e0) (0xc000960780) Create stream
I0123 23:39:36.802801 8 log.go:172] (0xc0026d51e0) (0xc000960780) Stream added, broadcasting: 1
I0123 23:39:36.808166 8 log.go:172] (0xc0026d51e0) Reply frame received for 1
I0123 23:39:36.808258 8 log.go:172] (0xc0026d51e0) (0xc00204c320) Create stream
I0123 23:39:36.808271 8 log.go:172] (0xc0026d51e0) (0xc00204c320) Stream added, broadcasting: 3
I0123 23:39:36.809859 8 log.go:172] (0xc0026d51e0) Reply frame received for 3
I0123 23:39:36.809878 8 log.go:172] (0xc0026d51e0) (0xc0009120a0) Create stream
I0123 23:39:36.809886 8 log.go:172] (0xc0026d51e0) (0xc0009120a0) Stream added, broadcasting: 5
I0123 23:39:36.810989 8 log.go:172] (0xc0026d51e0) Reply frame received for 5
I0123 23:39:36.884221 8 log.go:172] (0xc0026d51e0) Data frame received for 3
I0123 23:39:36.884394 8 log.go:172] (0xc00204c320) (3) Data frame handling
I0123 23:39:36.884437 8 log.go:172] (0xc00204c320) (3) Data frame sent
I0123 23:39:36.975781 8 log.go:172] (0xc0026d51e0) Data frame received for 1
I0123 23:39:36.975858 8 log.go:172] (0xc0026d51e0) (0xc00204c320) Stream removed, broadcasting: 3
I0123 23:39:36.975890 8 log.go:172] (0xc000960780) (1) Data frame handling
I0123 23:39:36.975900 8 log.go:172] (0xc000960780) (1) Data frame sent
I0123 23:39:36.975962 8 log.go:172] (0xc0026d51e0) (0xc0009120a0) Stream removed, broadcasting: 5
I0123 23:39:36.976007 8 log.go:172] (0xc0026d51e0) (0xc000960780) Stream removed, broadcasting: 1
I0123 23:39:36.976027 8 log.go:172] (0xc0026d51e0) Go away received
I0123 23:39:36.976108 8 log.go:172] (0xc0026d51e0) (0xc000960780) Stream removed, broadcasting: 1
I0123 23:39:36.976120 8 log.go:172] (0xc0026d51e0) (0xc00204c320) Stream removed, broadcasting: 3
I0123 23:39:36.976128 8 log.go:172] (0xc0026d51e0) (0xc0009120a0) Stream removed, broadcasting: 5
Jan 23 23:39:36.976: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 23 23:39:36.976: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1845 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 23:39:36.976: INFO: >>> kubeConfig: /root/.kube/config
I0123 23:39:37.026282 8 log.go:172] (0xc00293a420) (0xc000912820) Create stream
I0123 23:39:37.026311 8 log.go:172] (0xc00293a420) (0xc000912820) Stream added, broadcasting: 1
I0123 23:39:37.028232 8 log.go:172] (0xc00293a420) Reply frame received for 1
I0123 23:39:37.028251 8 log.go:172] (0xc00293a420) (0xc000912aa0) Create stream
I0123 23:39:37.028257 8 log.go:172] (0xc00293a420) (0xc000912aa0) Stream added, broadcasting: 3
I0123 23:39:37.029093 8 log.go:172] (0xc00293a420) Reply frame received for 3
I0123 23:39:37.029110 8 log.go:172] (0xc00293a420) (0xc000960960) Create stream
I0123 23:39:37.029115 8 log.go:172] (0xc00293a420) (0xc000960960) Stream added, broadcasting: 5
I0123 23:39:37.029933 8 log.go:172] (0xc00293a420) Reply frame received for 5
I0123 23:39:37.086920 8 log.go:172] (0xc00293a420) Data frame received for 3
I0123 23:39:37.086952 8 log.go:172] (0xc000912aa0) (3) Data frame handling
I0123 23:39:37.086983 8 log.go:172] (0xc000912aa0) (3) Data frame sent
I0123 23:39:37.148983 8 log.go:172] (0xc00293a420) Data frame received for 1
I0123 23:39:37.149199 8 log.go:172] (0xc00293a420) (0xc000912aa0) Stream removed, broadcasting: 3
I0123 23:39:37.149229 8 log.go:172] (0xc000912820) (1) Data frame handling
I0123 23:39:37.149243 8 log.go:172] (0xc000912820) (1) Data frame sent
I0123 23:39:37.149297 8 log.go:172] (0xc00293a420) (0xc000960960) Stream removed, broadcasting: 5
I0123 23:39:37.149345 8 log.go:172] (0xc00293a420) (0xc000912820) Stream removed, broadcasting: 1
I0123 23:39:37.149365 8 log.go:172] (0xc00293a420) Go away received
I0123 23:39:37.149450 8 log.go:172] (0xc00293a420) (0xc000912820) Stream removed, broadcasting: 1
I0123 23:39:37.149460 8 log.go:172] (0xc00293a420) (0xc000912aa0) Stream removed, broadcasting: 3
I0123 23:39:37.149464 8 log.go:172] (0xc00293a420) (0xc000960960) Stream removed, broadcasting: 5
Jan 23 23:39:37.149: INFO: Exec stderr: ""
Jan 23 23:39:37.149: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1845 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 23:39:37.149: INFO: >>> kubeConfig: /root/.kube/config
I0123 23:39:37.181436 8 log.go:172] (0xc002cce630) (0xc000960e60) Create stream
I0123 23:39:37.181511 8 log.go:172] (0xc002cce630) (0xc000960e60) Stream added, broadcasting: 1
I0123 23:39:37.184284 8 log.go:172] (0xc002cce630) Reply frame received for 1
I0123 23:39:37.184309 8 log.go:172] (0xc002cce630) (0xc00204c3c0) Create stream
I0123 23:39:37.184314 8 log.go:172] (0xc002cce630) (0xc00204c3c0) Stream added, broadcasting: 3
I0123 23:39:37.185366 8 log.go:172] (0xc002cce630) Reply frame received for 3
I0123 23:39:37.185386 8 log.go:172] (0xc002cce630) (0xc000912fa0) Create stream
I0123 23:39:37.185394 8 log.go:172] (0xc002cce630) (0xc000912fa0) Stream added, broadcasting: 5
I0123 23:39:37.186510 8 log.go:172] (0xc002cce630) Reply frame received for 5
I0123 23:39:37.243706 8 log.go:172] (0xc002cce630) Data frame received for 3
I0123 23:39:37.243764 8 log.go:172] (0xc00204c3c0) (3) Data frame handling
I0123 23:39:37.243778 8 log.go:172] (0xc00204c3c0) (3) Data frame sent
I0123 23:39:37.323697 8 log.go:172] (0xc002cce630) (0xc00204c3c0) Stream removed, broadcasting: 3
I0123 23:39:37.323806 8 log.go:172] (0xc002cce630) Data frame received for 1
I0123 23:39:37.323824 8 log.go:172] (0xc000960e60) (1) Data frame handling
I0123 23:39:37.323840 8 log.go:172] (0xc000960e60) (1) Data frame sent
I0123 23:39:37.323851 8 log.go:172] (0xc002cce630) (0xc000912fa0) Stream removed, broadcasting: 5
I0123 23:39:37.323870 8 log.go:172] (0xc002cce630) (0xc000960e60) Stream removed, broadcasting: 1
I0123 23:39:37.323882 8 log.go:172] (0xc002cce630) Go away received
I0123 23:39:37.324020 8 log.go:172] (0xc002cce630) (0xc000960e60) Stream removed, broadcasting: 1
I0123 23:39:37.324034 8 log.go:172] (0xc002cce630) (0xc00204c3c0) Stream removed, broadcasting: 3
I0123 23:39:37.324051 8 log.go:172] (0xc002cce630) (0xc000912fa0) Stream removed, broadcasting: 5
Jan 23 23:39:37.324: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 23 23:39:37.324: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1845 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 23:39:37.324: INFO: >>> kubeConfig: /root/.kube/config
I0123 23:39:37.361036 8 log.go:172] (0xc002ccec60) (0xc000961220) Create stream
I0123 23:39:37.361141 8 log.go:172] (0xc002ccec60) (0xc000961220) Stream added, broadcasting: 1
I0123 23:39:37.373860 8 log.go:172] (0xc002ccec60) Reply frame received for 1
I0123 23:39:37.373958 8 log.go:172] (0xc002ccec60) (0xc0008f4000) Create stream
I0123 23:39:37.373964 8 log.go:172] (0xc002ccec60) (0xc0008f4000) Stream added, broadcasting: 3
I0123 23:39:37.375586 8 log.go:172] (0xc002ccec60) Reply frame received for 3
I0123 23:39:37.375616 8 log.go:172] (0xc002ccec60) (0xc000980820) Create stream
I0123 23:39:37.375628 8 log.go:172] (0xc002ccec60) (0xc000980820) Stream added, broadcasting: 5
I0123 23:39:37.376649 8 log.go:172] (0xc002ccec60) Reply frame received for 5
I0123 23:39:37.448648 8 log.go:172] (0xc002ccec60) Data frame received for 3
I0123 23:39:37.448698 8 log.go:172] (0xc0008f4000) (3) Data frame handling
I0123 23:39:37.448717 8 log.go:172] (0xc0008f4000) (3) Data frame sent
I0123 23:39:37.513813 8 log.go:172] (0xc002ccec60) Data frame received for 1
I0123 23:39:37.513943 8 log.go:172] (0xc000961220) (1) Data frame handling
I0123 23:39:37.513973 8 log.go:172] (0xc000961220) (1) Data frame sent
I0123 23:39:37.514882 8 log.go:172] (0xc002ccec60) (0xc000961220) Stream removed, broadcasting: 1
I0123 23:39:37.514996 8 log.go:172] (0xc002ccec60) (0xc0008f4000) Stream removed, broadcasting: 3
I0123 23:39:37.515032 8 log.go:172] (0xc002ccec60) (0xc000980820) Stream removed, broadcasting: 5
I0123 23:39:37.515052 8 log.go:172] (0xc002ccec60) Go away received
I0123 23:39:37.515101 8 log.go:172] (0xc002ccec60) (0xc000961220) Stream removed, broadcasting: 1
I0123 23:39:37.515130 8 log.go:172] (0xc002ccec60) (0xc0008f4000) Stream removed, broadcasting: 3
I0123 23:39:37.515153 8 log.go:172] (0xc002ccec60) (0xc000980820) Stream removed, broadcasting: 5
Jan 23 23:39:37.515: INFO: Exec stderr: ""
Jan 23 23:39:37.515: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1845 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 23:39:37.515: INFO: >>> kubeConfig: /root/.kube/config
I0123 23:39:37.564748 8 log.go:172] (0xc002d24790) (0xc0009814a0) Create stream
I0123 23:39:37.564835 8 log.go:172] (0xc002d24790) (0xc0009814a0) Stream added, broadcasting: 1
I0123 23:39:37.568215 8 log.go:172] (0xc002d24790) Reply frame received for 1
I0123 23:39:37.568252 8 log.go:172] (0xc002d24790) (0xc000913220) Create stream
I0123 23:39:37.568265 8 log.go:172] (0xc002d24790) (0xc000913220) Stream added, broadcasting: 3
I0123 23:39:37.569759 8 log.go:172] (0xc002d24790) Reply frame received for 3
I0123 23:39:37.569790 8 log.go:172] (0xc002d24790) (0xc00204c460) Create stream
I0123 23:39:37.569805 8 log.go:172] (0xc002d24790) (0xc00204c460) Stream added, broadcasting: 5
I0123 23:39:37.572277 8 log.go:172] (0xc002d24790) Reply frame received for 5
I0123 23:39:37.644977 8 log.go:172] (0xc002d24790) Data frame received for 3
I0123 23:39:37.645011 8 log.go:172] (0xc000913220) (3) Data frame handling
I0123 23:39:37.645043 8 log.go:172] (0xc000913220) (3) Data frame sent
I0123 23:39:37.718278 8 log.go:172] (0xc002d24790) Data frame received for 1
I0123 23:39:37.718332 8 log.go:172] (0xc0009814a0) (1) Data frame handling
I0123 23:39:37.718349 8 log.go:172] (0xc0009814a0) (1) Data frame sent
I0123 23:39:37.718367 8 log.go:172] (0xc002d24790) (0xc0009814a0) Stream removed, broadcasting: 1
I0123 23:39:37.718647 8 log.go:172] (0xc002d24790) (0xc000913220) Stream removed, broadcasting: 3
I0123 23:39:37.719665 8 log.go:172] (0xc002d24790) (0xc00204c460) Stream removed, broadcasting: 5
I0123 23:39:37.719710 8 log.go:172] (0xc002d24790) (0xc0009814a0) Stream removed, broadcasting: 1
I0123 23:39:37.719722 8 log.go:172] (0xc002d24790) (0xc000913220) Stream removed, broadcasting: 3
I0123 23:39:37.719733 8 log.go:172] (0xc002d24790) (0xc00204c460) Stream removed, broadcasting: 5
Jan 23 23:39:37.719: INFO: Exec stderr: ""
I0123 23:39:37.719881 8 log.go:172] (0xc002d24790) Go away received
Jan 23 23:39:37.720: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1845 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 23:39:37.720: INFO: >>> kubeConfig: /root/.kube/config
I0123 23:39:37.763806 8 log.go:172] (0xc00260f600) (0xc00204c640) Create stream
I0123 23:39:37.763881 8 log.go:172] (0xc00260f600) (0xc00204c640) Stream added, broadcasting: 1
I0123 23:39:37.766526 8 log.go:172] (0xc00260f600) Reply frame received for 1
I0123 23:39:37.766567 8 log.go:172] (0xc00260f600) (0xc0009815e0) Create stream
I0123 23:39:37.766574 8 log.go:172] (0xc00260f600) (0xc0009815e0) Stream added, broadcasting: 3
I0123 23:39:37.767410 8 log.go:172] (0xc00260f600) Reply frame received for 3
I0123 23:39:37.767433 8 log.go:172] (0xc00260f600) (0xc0009817c0) Create stream
I0123 23:39:37.767442 8 log.go:172] (0xc00260f600) (0xc0009817c0) Stream added, broadcasting: 5
I0123 23:39:37.768296 8 log.go:172] (0xc00260f600) Reply frame received for 5
I0123 23:39:37.852740 8 log.go:172] (0xc00260f600) Data frame received for 3
I0123 23:39:37.852799 8 log.go:172] (0xc0009815e0) (3) Data frame handling
I0123 23:39:37.852821 8 log.go:172] (0xc0009815e0) (3) Data frame sent
I0123 23:39:37.957280 8 log.go:172] (0xc00260f600) Data frame received for 1
I0123 23:39:37.957349 8 log.go:172] (0xc00204c640) (1) Data frame handling
I0123 23:39:37.957368 8 log.go:172] (0xc00204c640) (1) Data frame sent
I0123 23:39:37.958100 8 log.go:172] (0xc00260f600) (0xc00204c640) Stream removed, broadcasting: 1
I0123 23:39:37.958685 8 log.go:172] (0xc00260f600) (0xc0009815e0) Stream removed, broadcasting: 3
I0123 23:39:37.959378 8 log.go:172] (0xc00260f600) (0xc0009817c0) Stream removed, broadcasting: 5
I0123 23:39:37.959405 8 log.go:172] (0xc00260f600) (0xc00204c640) Stream removed, broadcasting: 1
I0123 23:39:37.959419 8 log.go:172] (0xc00260f600) (0xc0009815e0) Stream removed, broadcasting: 3
I0123 23:39:37.959431 8 log.go:172] (0xc00260f600) (0xc0009817c0) Stream removed, broadcasting: 5
Jan 23 23:39:37.959: INFO: Exec stderr: ""
Jan 23 23:39:37.959: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1845 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 23 23:39:37.959: INFO: >>> kubeConfig: /root/.kube/config
I0123 23:39:38.000799 8 log.go:172] (0xc002d24d10) (0xc000981900) Create stream
I0123 23:39:38.000846 8 log.go:172] (0xc002d24d10) (0xc000981900) Stream added, broadcasting: 1
I0123 23:39:38.003148 8 log.go:172] (0xc002d24d10) Reply frame received for 1
I0123 23:39:38.003186 8 log.go:172] (0xc002d24d10) (0xc0009134a0) Create stream
I0123 23:39:38.003198 8 log.go:172] (0xc002d24d10) (0xc0009134a0) Stream added, broadcasting: 3
I0123 23:39:38.004172 8 log.go:172] (0xc002d24d10) Reply frame received for 3
I0123 23:39:38.004192 8 log.go:172] (0xc002d24d10) (0xc00204c820) Create stream
I0123 23:39:38.004200 8 log.go:172] (0xc002d24d10) (0xc00204c820) Stream added, broadcasting: 5
I0123 23:39:38.005885 8 log.go:172] (0xc002d24d10) Reply frame received for 5
I0123 23:39:38.061227 8 log.go:172] (0xc002d24d10) Data frame received for 3
I0123 23:39:38.061267 8 log.go:172] (0xc0009134a0) (3) Data frame handling
I0123 23:39:38.061284 8 log.go:172] (0xc0009134a0) (3) Data frame sent
I0123 23:39:38.130306 8 log.go:172] (0xc002d24d10) Data frame received for 1
I0123 23:39:38.130351 8 log.go:172] (0xc000981900) (1) Data frame handling
I0123 23:39:38.130372 8 log.go:172] (0xc000981900) (1) Data frame sent
I0123 23:39:38.130392 8 log.go:172] (0xc002d24d10) (0xc000981900) Stream removed, broadcasting: 1
I0123 23:39:38.130456 8 log.go:172] (0xc002d24d10) (0xc0009134a0) Stream removed, broadcasting: 3
I0123 23:39:38.131118 8 log.go:172] (0xc002d24d10) (0xc00204c820) Stream removed, broadcasting: 5
I0123 23:39:38.131174 8 log.go:172] (0xc002d24d10) (0xc000981900) Stream removed, broadcasting: 1
I0123 23:39:38.131196 8 log.go:172] (0xc002d24d10) (0xc0009134a0) Stream removed, broadcasting: 3
I0123 23:39:38.131216 8 log.go:172] (0xc002d24d10) (0xc00204c820) Stream removed, broadcasting: 5
I0123 23:39:38.131308 8 log.go:172] (0xc002d24d10) Go away received
Jan 23 23:39:38.131: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 23 23:39:38.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-1845" for this suite.
• [SLOW TEST:22.433 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":8,"failed":0}
S
------------------------------
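For reference, a minimal Go sketch (not taken from the suite's source) of the pod shape this spec exercises. The kubelet manages /etc/hosts for ordinary pod containers, but leaves it alone when a container mounts its own /etc/hosts or when the pod runs with hostNetwork=true, which is exactly what the three verification steps above check against the /etc/hosts-original copy. The image, commands, and hostPath mount are illustrative assumptions.

// Sketch, assuming a cluster reachable via client-go; names are illustrative.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func etcHostsTestPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "host-etc-hosts",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
				},
			}},
			Containers: []corev1.Container{
				// busybox-1 and busybox-2: /etc/hosts is kubelet-managed.
				{Name: "busybox-1", Image: "busybox", Command: []string{"sleep", "900"}},
				{Name: "busybox-2", Image: "busybox", Command: []string{"sleep", "900"}},
				// busybox-3 mounts /etc/hosts itself, so the kubelet must not
				// overwrite it -- the "not kubelet-managed" check above.
				{
					Name: "busybox-3", Image: "busybox", Command: []string{"sleep", "900"},
					VolumeMounts: []corev1.VolumeMount{{Name: "host-etc-hosts", MountPath: "/etc/hosts"}},
				},
			},
		},
	}
}

func main() { fmt.Println(etcHostsTestPod().Name) }

A second pod with Spec.HostNetwork set to true covers the hostNetwork=true branch of the test.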
[sig-storage] EmptyDir volumes
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 23 23:39:38.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 23 23:39:38.234: INFO: Waiting up to 5m0s for pod "pod-0dd99782-e550-4a31-b35c-4d4ee9a34f15" in namespace "emptydir-3207" to be "success or failure"
Jan 23 23:39:38.284: INFO: Pod "pod-0dd99782-e550-4a31-b35c-4d4ee9a34f15": Phase="Pending", Reason="", readiness=false. Elapsed: 49.905624ms
Jan 23 23:39:40.292: INFO: Pod "pod-0dd99782-e550-4a31-b35c-4d4ee9a34f15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057220343s
Jan 23 23:39:42.299: INFO: Pod "pod-0dd99782-e550-4a31-b35c-4d4ee9a34f15": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064804029s
Jan 23 23:39:44.710: INFO: Pod "pod-0dd99782-e550-4a31-b35c-4d4ee9a34f15": Phase="Pending", Reason="", readiness=false. Elapsed: 6.476053204s
Jan 23 23:39:46.723: INFO: Pod "pod-0dd99782-e550-4a31-b35c-4d4ee9a34f15": Phase="Pending", Reason="", readiness=false. Elapsed: 8.488600137s
Jan 23 23:39:50.045: INFO: Pod "pod-0dd99782-e550-4a31-b35c-4d4ee9a34f15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.810290113s
STEP: Saw pod success
Jan 23 23:39:50.045: INFO: Pod "pod-0dd99782-e550-4a31-b35c-4d4ee9a34f15" satisfied condition "success or failure"
Jan 23 23:39:50.051: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-0dd99782-e550-4a31-b35c-4d4ee9a34f15 container test-container:
STEP: delete the pod
Jan 23 23:39:50.198: INFO: Waiting for pod pod-0dd99782-e550-4a31-b35c-4d4ee9a34f15 to disappear
Jan 23 23:39:50.215: INFO: Pod pod-0dd99782-e550-4a31-b35c-4d4ee9a34f15 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 23 23:39:50.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3207" for this suite.
• [SLOW TEST:12.085 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":9,"failed":0}
SSSSSSSS
------------------------------
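A hedged sketch of what the "(non-root,0644,default)" tuple denotes: a file with mode 0644 on an emptyDir backed by the default medium (node disk rather than tmpfs), written by a container running as a non-root UID. The real test uses the e2e mounttest image; the busybox command, UID, and names below are stand-ins.

// Sketch only; image, UID, and paths are illustrative assumptions.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func emptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0644"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1001)}, // non-root
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// An empty Medium selects the default, disk-backed medium.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo ok > /mnt/test/f && chmod 0644 /mnt/test/f && stat -c %a /mnt/test/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/test"}},
			}},
		},
	}
}

func main() { _ = emptyDirPod() }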
[sig-storage] Projected secret
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 23 23:39:50.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-7080dff8-7a88-4b5f-bc16-c19b930b1a4b
STEP: Creating a pod to test consume secrets
Jan 23 23:39:50.375: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e9385bb9-81e0-49f4-807e-9bcbb90e200a" in namespace "projected-1113" to be "success or failure"
Jan 23 23:39:50.388: INFO: Pod "pod-projected-secrets-e9385bb9-81e0-49f4-807e-9bcbb90e200a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.32077ms
Jan 23 23:39:52.504: INFO: Pod "pod-projected-secrets-e9385bb9-81e0-49f4-807e-9bcbb90e200a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129736856s
Jan 23 23:39:54.722: INFO: Pod "pod-projected-secrets-e9385bb9-81e0-49f4-807e-9bcbb90e200a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.346881087s
Jan 23 23:39:56.743: INFO: Pod "pod-projected-secrets-e9385bb9-81e0-49f4-807e-9bcbb90e200a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.367852718s
Jan 23 23:39:58.749: INFO: Pod "pod-projected-secrets-e9385bb9-81e0-49f4-807e-9bcbb90e200a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.373919282s
STEP: Saw pod success
Jan 23 23:39:58.749: INFO: Pod "pod-projected-secrets-e9385bb9-81e0-49f4-807e-9bcbb90e200a" satisfied condition "success or failure"
Jan 23 23:39:58.752: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-projected-secrets-e9385bb9-81e0-49f4-807e-9bcbb90e200a container projected-secret-volume-test:
STEP: delete the pod
Jan 23 23:39:58.928: INFO: Waiting for pod pod-projected-secrets-e9385bb9-81e0-49f4-807e-9bcbb90e200a to disappear
Jan 23 23:39:59.234: INFO: Pod pod-projected-secrets-e9385bb9-81e0-49f4-807e-9bcbb90e200a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 23 23:39:59.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1113" for this suite.
• [SLOW TEST:9.017 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":17,"failed":0}
SSSSSSSSSSS
------------------------------
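The two knobs named in the spec title live on different objects: defaultMode on the projected volume source and fsGroup on the pod security context. A minimal sketch, with illustrative UIDs, mode, and names patterned on the log:

// Sketch only; the mode 0440 and the UIDs are assumed values.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }
func int64Ptr(i int64) *int64 { return &i }

func projectedSecretPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000), // non-root, per the spec title
				FSGroup:   int64Ptr(1001), // group ownership applied to the volume
			},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: int32Ptr(0440), // defaultMode under test
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/projected-secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "projected-secret-volume", MountPath: "/etc/projected-secret-volume",
				}},
			}},
		},
	}
}

func main() { _ = projectedSecretPod() }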
[sig-apps] ReplicationController
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 23 23:39:59.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 23 23:39:59.410: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 23 23:40:04.547: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 23 23:40:04.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8407" for this suite.
• [SLOW TEST:6.292 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":4,"skipped":28,"failed":0}
SSSSSSSS
------------------------------
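The step the spec automates can be reproduced by hand: relabel one of the RC's pods so it no longer matches the selector, and the controller "releases" it (drops its controller ownerReference) while creating a replacement to restore the replica count. A sketch under stated assumptions: the pod name is hypothetical (the RC generates a random suffix), and the Patch signature shown is the client-go v0.17-era one; later releases add a context.Context argument.

// Sketch only; pod name and new label value are illustrative.
package main

import (
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// The RC selects name=pod-release; changing the label orphans the pod.
	patch := []byte(`{"metadata":{"labels":{"name":"pod-released"}}}`)
	if _, err := client.CoreV1().Pods("replication-controller-8407").
		Patch("pod-release-xxxxx", types.StrategicMergePatchType, patch); err != nil {
		panic(err)
	}
}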
[sig-apps] Deployment
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 23 23:40:05.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:73
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 23 23:40:05.929: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 23 23:40:10.981: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 23 23:40:20.991: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:67
Jan 23 23:40:21.069: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-7280 /apis/apps/v1/namespaces/deployment-7280/deployments/test-cleanup-deployment 2a667f78-b356-4f03-97fc-3e1306fe50ac 3902611 1 2020-01-23 23:40:20 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027a8c98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}
Jan 23 23:40:21.084: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-7280 /apis/apps/v1/namespaces/deployment-7280/replicasets/test-cleanup-deployment-55ffc6b7b6 5587d072-1d17-4a4f-981e-83b21d318b03 3902615 1 2020-01-23 23:40:21 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 2a667f78-b356-4f03-97fc-3e1306fe50ac 0xc0027a9157 0xc0027a9158}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027a91c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 23 23:40:21.084: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan 23 23:40:21.084: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-7280 /apis/apps/v1/namespaces/deployment-7280/replicasets/test-cleanup-controller 90a4d9b1-a75b-4285-8e5f-c4be02c443fc 3902614 1 2020-01-23 23:40:05 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 2a667f78-b356-4f03-97fc-3e1306fe50ac 0xc0027a906f 0xc0027a9080}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0027a90e8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 23 23:40:21.159: INFO: Pod "test-cleanup-controller-vbpcl" is available: &Pod{ObjectMeta:{test-cleanup-controller-vbpcl test-cleanup-controller- deployment-7280 /api/v1/namespaces/deployment-7280/pods/test-cleanup-controller-vbpcl 2b0b814e-ac7e-4eea-a7fe-ff55ae1900e5 3902607 0 2020-01-23 23:40:05 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 90a4d9b1-a75b-4285-8e5f-c4be02c443fc 0xc0027a9707 0xc0027a9708}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8ffvk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8ffvk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8ffvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 23:40:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 23:40:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 23:40:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-23 23:40:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-01-23 23:40:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-23 23:40:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://f566e22cb6e42b3c603f253393edfa9e40b1e6bd5eecc10dbe8a7c49b2ff6f40,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 23 23:40:21.159: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-smd9w" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-smd9w test-cleanup-deployment-55ffc6b7b6- deployment-7280 /api/v1/namespaces/deployment-7280/pods/test-cleanup-deployment-55ffc6b7b6-smd9w 885dbe64-66f8-48e2-b6b5-692d7f055646 3902617 0 2020-01-23 23:40:21 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 5587d072-1d17-4a4f-981e-83b21d318b03 0xc0027a9877 0xc0027a9878}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8ffvk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8ffvk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8ffvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 23 23:40:21.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7280" for this suite.
• [SLOW TEST:15.698 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":5,"skipped":36,"failed":0}
SS
------------------------------
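The knob that drives this behavior is visible in the struct dump above: the Deployment is created with RevisionHistoryLimit:*0, so the controller garbage-collects an old ReplicaSet as soon as it is fully rolled over. A sketch of the relevant spec, with the names and image taken from the log:

// Sketch only; mirrors the dumped "test-cleanup-deployment" fields.
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func cleanupDeployment() *appsv1.Deployment {
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             int32Ptr(1),
			RevisionHistoryLimit: int32Ptr(0), // keep zero old ReplicaSets
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"name": "cleanup-pod"},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "cleanup-pod"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
					}},
				},
			},
		},
	}
}

func main() { _ = cleanupDeployment() }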
[Conformance]","total":278,"completed":6,"skipped":38,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:40:39.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-projected-all-test-volume-db971a9f-f7c1-4984-b3e8-bebde9453c7c STEP: Creating secret with name secret-projected-all-test-volume-db0026e5-2b01-49d3-bbf0-e8bfe2120044 STEP: Creating a pod to test Check all projections for projected volume plugin Jan 23 23:40:39.755: INFO: Waiting up to 5m0s for pod "projected-volume-4da9ddab-8a68-4d72-9ce2-531e90f7a748" in namespace "projected-1579" to be "success or failure" Jan 23 23:40:39.773: INFO: Pod "projected-volume-4da9ddab-8a68-4d72-9ce2-531e90f7a748": Phase="Pending", Reason="", readiness=false. Elapsed: 18.138945ms Jan 23 23:40:41.781: INFO: Pod "projected-volume-4da9ddab-8a68-4d72-9ce2-531e90f7a748": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026457086s Jan 23 23:40:43.791: INFO: Pod "projected-volume-4da9ddab-8a68-4d72-9ce2-531e90f7a748": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035959821s Jan 23 23:40:45.811: INFO: Pod "projected-volume-4da9ddab-8a68-4d72-9ce2-531e90f7a748": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056033511s Jan 23 23:40:47.822: INFO: Pod "projected-volume-4da9ddab-8a68-4d72-9ce2-531e90f7a748": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06752553s STEP: Saw pod success Jan 23 23:40:47.823: INFO: Pod "projected-volume-4da9ddab-8a68-4d72-9ce2-531e90f7a748" satisfied condition "success or failure" Jan 23 23:40:47.829: INFO: Trying to get logs from node jerma-node pod projected-volume-4da9ddab-8a68-4d72-9ce2-531e90f7a748 container projected-all-volume-test: STEP: delete the pod Jan 23 23:40:48.166: INFO: Waiting for pod projected-volume-4da9ddab-8a68-4d72-9ce2-531e90f7a748 to disappear Jan 23 23:40:48.178: INFO: Pod projected-volume-4da9ddab-8a68-4d72-9ce2-531e90f7a748 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:40:48.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1579" for this suite. 
[sig-storage] Projected combined
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 23 23:40:39.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-projected-all-test-volume-db971a9f-f7c1-4984-b3e8-bebde9453c7c
STEP: Creating secret with name secret-projected-all-test-volume-db0026e5-2b01-49d3-bbf0-e8bfe2120044
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 23 23:40:39.755: INFO: Waiting up to 5m0s for pod "projected-volume-4da9ddab-8a68-4d72-9ce2-531e90f7a748" in namespace "projected-1579" to be "success or failure"
Jan 23 23:40:39.773: INFO: Pod "projected-volume-4da9ddab-8a68-4d72-9ce2-531e90f7a748": Phase="Pending", Reason="", readiness=false. Elapsed: 18.138945ms
Jan 23 23:40:41.781: INFO: Pod "projected-volume-4da9ddab-8a68-4d72-9ce2-531e90f7a748": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026457086s
Jan 23 23:40:43.791: INFO: Pod "projected-volume-4da9ddab-8a68-4d72-9ce2-531e90f7a748": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035959821s
Jan 23 23:40:45.811: INFO: Pod "projected-volume-4da9ddab-8a68-4d72-9ce2-531e90f7a748": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056033511s
Jan 23 23:40:47.822: INFO: Pod "projected-volume-4da9ddab-8a68-4d72-9ce2-531e90f7a748": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06752553s
STEP: Saw pod success
Jan 23 23:40:47.823: INFO: Pod "projected-volume-4da9ddab-8a68-4d72-9ce2-531e90f7a748" satisfied condition "success or failure"
Jan 23 23:40:47.829: INFO: Trying to get logs from node jerma-node pod projected-volume-4da9ddab-8a68-4d72-9ce2-531e90f7a748 container projected-all-volume-test:
STEP: delete the pod
Jan 23 23:40:48.166: INFO: Waiting for pod projected-volume-4da9ddab-8a68-4d72-9ce2-531e90f7a748 to disappear
Jan 23 23:40:48.178: INFO: Pod projected-volume-4da9ddab-8a68-4d72-9ce2-531e90f7a748 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 23 23:40:48.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1579" for this suite.
• [SLOW TEST:8.670 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":7,"skipped":60,"failed":0}
SSS
------------------------------
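"All components that make up the projection API" refers to combining the projection sources (configMap, secret, downwardAPI) in a single projected volume, which is what the configMap and secret created above feed into. A minimal sketch; object names follow the log minus the random suffixes, and the downwardAPI item is an illustrative choice:

// Sketch only; one volume, three projection sources.
package main

import (
	corev1 "k8s.io/api/core/v1"
)

func allProjections() corev1.Volume {
	return corev1.Volume{
		Name: "projected-all-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume"},
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume"},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
}

func main() { _ = allProjections() }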
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 23:40:55.055: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419649, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419649, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419649, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419648, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 23 23:40:58.103: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 23 23:40:58.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6543-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:40:59.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4305" for this suite. STEP: Destroying namespace "webhook-4305-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:11.645 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":8,"skipped":63,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:40:59.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 23 23:40:59.991: INFO: Waiting up to 5m0s for pod "downwardapi-volume-375422c2-8de9-464e-a263-ea2cc67ec791" in namespace "downward-api-8920" to be "success or failure" Jan 23 23:41:00.017: INFO: Pod "downwardapi-volume-375422c2-8de9-464e-a263-ea2cc67ec791": Phase="Pending", Reason="", readiness=false. Elapsed: 25.824359ms Jan 23 23:41:02.640: INFO: Pod "downwardapi-volume-375422c2-8de9-464e-a263-ea2cc67ec791": Phase="Pending", Reason="", readiness=false. Elapsed: 2.649207676s Jan 23 23:41:04.647: INFO: Pod "downwardapi-volume-375422c2-8de9-464e-a263-ea2cc67ec791": Phase="Pending", Reason="", readiness=false. Elapsed: 4.655850843s Jan 23 23:41:06.653: INFO: Pod "downwardapi-volume-375422c2-8de9-464e-a263-ea2cc67ec791": Phase="Pending", Reason="", readiness=false. Elapsed: 6.662330573s Jan 23 23:41:08.662: INFO: Pod "downwardapi-volume-375422c2-8de9-464e-a263-ea2cc67ec791": Phase="Pending", Reason="", readiness=false. Elapsed: 8.670585374s Jan 23 23:41:10.666: INFO: Pod "downwardapi-volume-375422c2-8de9-464e-a263-ea2cc67ec791": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.675206067s STEP: Saw pod success Jan 23 23:41:10.666: INFO: Pod "downwardapi-volume-375422c2-8de9-464e-a263-ea2cc67ec791" satisfied condition "success or failure" Jan 23 23:41:10.668: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-375422c2-8de9-464e-a263-ea2cc67ec791 container client-container: STEP: delete the pod Jan 23 23:41:11.356: INFO: Waiting for pod downwardapi-volume-375422c2-8de9-464e-a263-ea2cc67ec791 to disappear Jan 23 23:41:11.495: INFO: Pod downwardapi-volume-375422c2-8de9-464e-a263-ea2cc67ec791 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:41:11.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8920" for this suite. • [SLOW TEST:11.643 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":86,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:41:11.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:41:19.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9324" for this suite. 
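The read-only-root spec above boils down to a single securityContext flag; a minimal sketch, with image and command illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs             # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox-readonly-fs
    image: docker.io/library/busybox:1.29
    # Any write to / should fail with a read-only filesystem error.
    command: ["sh", "-c", "echo test > /file || echo 'read-only as expected'"]
    securityContext:
      readOnlyRootFilesystem: true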
• [SLOW TEST:8.230 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":133,"failed":0} [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:41:19.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-2327, will wait for the garbage collector to delete the pods Jan 23 23:41:27.963: INFO: Deleting Job.batch foo took: 12.778147ms Jan 23 23:41:28.263: INFO: Terminating Job.batch foo pods took: 300.359706ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:42:04.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2327" for this suite. 
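The Job being deleted above is shaped roughly like this; parallelism matches the "active pods == parallelism" check, everything else is illustrative. Deleting the Job normally, as the test does, leaves pod cleanup to the garbage collector, which accounts for the long "Ensuring job was deleted" wait in the log:

apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  parallelism: 2                        # the test waits until active pods == parallelism
  template:
    metadata:
      labels:
        job: foo
    spec:
      restartPolicy: Never              # Jobs only allow Never or OnFailure
      containers:
      - name: c
        image: docker.io/library/busybox:1.29
        command: ["sleep", "1000000"]   # keeps pods active until the delete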
• [SLOW TEST:44.843 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":11,"skipped":133,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:42:04.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating Agnhost RC Jan 23 23:42:04.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8551' Jan 23 23:42:06.736: INFO: stderr: "" Jan 23 23:42:06.736: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jan 23 23:42:07.742: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 23:42:07.743: INFO: Found 0 / 1 Jan 23 23:42:08.744: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 23:42:08.744: INFO: Found 0 / 1 Jan 23 23:42:09.742: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 23:42:09.742: INFO: Found 0 / 1 Jan 23 23:42:10.743: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 23:42:10.743: INFO: Found 0 / 1 Jan 23 23:42:11.742: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 23:42:11.742: INFO: Found 0 / 1 Jan 23 23:42:12.775: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 23:42:12.775: INFO: Found 0 / 1 Jan 23 23:42:13.743: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 23:42:13.743: INFO: Found 0 / 1 Jan 23 23:42:14.986: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 23:42:14.986: INFO: Found 1 / 1 Jan 23 23:42:14.986: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jan 23 23:42:14.989: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 23:42:14.989: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 23 23:42:14.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-7qrxr --namespace=kubectl-8551 -p {"metadata":{"annotations":{"x":"y"}}}' Jan 23 23:42:15.167: INFO: stderr: "" Jan 23 23:42:15.167: INFO: stdout: "pod/agnhost-master-7qrxr patched\n" STEP: checking annotations Jan 23 23:42:15.172: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 23:42:15.172: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:42:15.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8551" for this suite. 
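The strategic-merge patch sent above, {"metadata":{"annotations":{"x":"y"}}}, is just this document in YAML form; kubectl merges it into each matched pod, which is why the subsequent "checking annotations" step can read the value back:

metadata:
  annotations:
    x: "y"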
• [SLOW TEST:10.587 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1540 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":12,"skipped":137,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:42:15.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: executing a command with run --rm and attach with stdin Jan 23 23:42:15.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3047 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jan 23 23:42:24.226: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0123 23:42:23.051195 69 log.go:172] (0xc0000f6e70) (0xc0006c9b80) Create stream\nI0123 23:42:23.051268 69 log.go:172] (0xc0000f6e70) (0xc0006c9b80) Stream added, broadcasting: 1\nI0123 23:42:23.055262 69 log.go:172] (0xc0000f6e70) Reply frame received for 1\nI0123 23:42:23.055304 69 log.go:172] (0xc0000f6e70) (0xc0007e6000) Create stream\nI0123 23:42:23.055321 69 log.go:172] (0xc0000f6e70) (0xc0007e6000) Stream added, broadcasting: 3\nI0123 23:42:23.057017 69 log.go:172] (0xc0000f6e70) Reply frame received for 3\nI0123 23:42:23.057059 69 log.go:172] (0xc0000f6e70) (0xc0006c9c20) Create stream\nI0123 23:42:23.057074 69 log.go:172] (0xc0000f6e70) (0xc0006c9c20) Stream added, broadcasting: 5\nI0123 23:42:23.059412 69 log.go:172] (0xc0000f6e70) Reply frame received for 5\nI0123 23:42:23.059445 69 log.go:172] (0xc0000f6e70) (0xc0006c9cc0) Create stream\nI0123 23:42:23.059455 69 log.go:172] (0xc0000f6e70) (0xc0006c9cc0) Stream added, broadcasting: 7\nI0123 23:42:23.060792 69 log.go:172] (0xc0000f6e70) Reply frame received for 7\nI0123 23:42:23.061009 69 log.go:172] (0xc0007e6000) (3) Writing data frame\nI0123 23:42:23.061298 69 log.go:172] (0xc0007e6000) (3) Writing data frame\nI0123 23:42:23.066225 69 log.go:172] (0xc0000f6e70) Data frame received for 5\nI0123 23:42:23.066251 69 log.go:172] (0xc0006c9c20) (5) Data frame handling\nI0123 23:42:23.066291 69 log.go:172] (0xc0006c9c20) (5) Data frame sent\nI0123 23:42:23.070669 69 log.go:172] (0xc0000f6e70) Data frame received for 5\nI0123 23:42:23.070729 69 log.go:172] (0xc0006c9c20) (5) Data frame handling\nI0123 23:42:23.070752 69 log.go:172] (0xc0006c9c20) (5) Data frame sent\nI0123 23:42:24.193487 69 log.go:172] (0xc0000f6e70) Data frame received for 1\nI0123 23:42:24.193533 69 log.go:172] (0xc0006c9b80) (1) Data frame handling\nI0123 23:42:24.193539 69 log.go:172] (0xc0006c9b80) (1) Data frame sent\nI0123 23:42:24.193548 69 log.go:172] (0xc0000f6e70) (0xc0006c9b80) Stream removed, broadcasting: 1\nI0123 23:42:24.193562 69 log.go:172] (0xc0000f6e70) (0xc0007e6000) Stream removed, broadcasting: 3\nI0123 23:42:24.193578 69 log.go:172] (0xc0000f6e70) (0xc0006c9c20) Stream removed, broadcasting: 5\nI0123 23:42:24.193586 69 log.go:172] (0xc0000f6e70) (0xc0006c9cc0) Stream removed, broadcasting: 7\nI0123 23:42:24.193884 69 log.go:172] (0xc0000f6e70) (0xc0006c9b80) Stream removed, broadcasting: 1\nI0123 23:42:24.193901 69 log.go:172] (0xc0000f6e70) (0xc0007e6000) Stream removed, broadcasting: 3\nI0123 23:42:24.193907 69 log.go:172] (0xc0000f6e70) (0xc0006c9c20) Stream removed, broadcasting: 5\nI0123 23:42:24.193912 69 log.go:172] (0xc0000f6e70) (0xc0006c9cc0) Stream removed, broadcasting: 7\nI0123 23:42:24.194210 69 log.go:172] (0xc0000f6e70) Go away received\n" Jan 23 23:42:24.226: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:42:26.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3047" for this suite. 
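Per the deprecation warning above, the generator-based invocation can be written out as a plain Job manifest instead; a sketch, with the restart policy assumed to mirror --restart=OnFailure:

apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        stdin: true                     # mirrors --stdin so 'cat' has input to drain
        command: ["sh", "-c", "cat && echo 'stdin closed'"]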
• [SLOW TEST:11.067 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1945 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":13,"skipped":165,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:42:26.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:687 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-2247 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2247 STEP: creating replication controller externalsvc in namespace services-2247 I0123 23:42:26.505615 8 runners.go:189] Created replication controller with name: externalsvc, namespace: services-2247, replica count: 2 I0123 23:42:29.556847 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 23:42:32.557556 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 23:42:35.558104 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0123 23:42:38.559358 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jan 23 23:42:38.663: INFO: Creating new exec pod Jan 23 23:42:46.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2247 execpod5lxqn -- /bin/sh -x -c nslookup clusterip-service' Jan 23 23:42:47.141: INFO: stderr: "I0123 23:42:46.906404 91 log.go:172] (0xc000b596b0) (0xc000a2e820) Create stream\nI0123 23:42:46.906617 91 log.go:172] (0xc000b596b0) (0xc000a2e820) Stream added, broadcasting: 1\nI0123 23:42:46.913307 91 log.go:172] (0xc000b596b0) Reply frame received for 1\nI0123 23:42:46.913372 91 log.go:172] (0xc000b596b0) (0xc0007d1c20) Create stream\nI0123 23:42:46.913384 91 log.go:172] (0xc000b596b0) (0xc0007d1c20) Stream added, broadcasting: 3\nI0123 23:42:46.918284 91 log.go:172] (0xc000b596b0) 
Reply frame received for 3\nI0123 23:42:46.918334 91 log.go:172] (0xc000b596b0) (0xc0006ce820) Create stream\nI0123 23:42:46.918344 91 log.go:172] (0xc000b596b0) (0xc0006ce820) Stream added, broadcasting: 5\nI0123 23:42:46.920542 91 log.go:172] (0xc000b596b0) Reply frame received for 5\nI0123 23:42:47.027961 91 log.go:172] (0xc000b596b0) Data frame received for 5\nI0123 23:42:47.028032 91 log.go:172] (0xc0006ce820) (5) Data frame handling\nI0123 23:42:47.028066 91 log.go:172] (0xc0006ce820) (5) Data frame sent\nI0123 23:42:47.028077 91 log.go:172] (0xc000b596b0) Data frame received for 5\nI0123 23:42:47.028094 91 log.go:172] (0xc0006ce820) (5) Data frame handling\n+ nslookup clusterip-service\nI0123 23:42:47.028163 91 log.go:172] (0xc0006ce820) (5) Data frame sent\nI0123 23:42:47.045517 91 log.go:172] (0xc000b596b0) Data frame received for 3\nI0123 23:42:47.045579 91 log.go:172] (0xc0007d1c20) (3) Data frame handling\nI0123 23:42:47.045628 91 log.go:172] (0xc0007d1c20) (3) Data frame sent\nI0123 23:42:47.049796 91 log.go:172] (0xc000b596b0) Data frame received for 3\nI0123 23:42:47.049823 91 log.go:172] (0xc0007d1c20) (3) Data frame handling\nI0123 23:42:47.049847 91 log.go:172] (0xc0007d1c20) (3) Data frame sent\nI0123 23:42:47.133498 91 log.go:172] (0xc000b596b0) Data frame received for 1\nI0123 23:42:47.133550 91 log.go:172] (0xc000b596b0) (0xc0006ce820) Stream removed, broadcasting: 5\nI0123 23:42:47.133590 91 log.go:172] (0xc000a2e820) (1) Data frame handling\nI0123 23:42:47.133607 91 log.go:172] (0xc000a2e820) (1) Data frame sent\nI0123 23:42:47.133619 91 log.go:172] (0xc000b596b0) (0xc0007d1c20) Stream removed, broadcasting: 3\nI0123 23:42:47.133639 91 log.go:172] (0xc000b596b0) (0xc000a2e820) Stream removed, broadcasting: 1\nI0123 23:42:47.133647 91 log.go:172] (0xc000b596b0) Go away received\nI0123 23:42:47.134103 91 log.go:172] (0xc000b596b0) (0xc000a2e820) Stream removed, broadcasting: 1\nI0123 23:42:47.134119 91 log.go:172] (0xc000b596b0) (0xc0007d1c20) Stream removed, broadcasting: 3\nI0123 23:42:47.134134 91 log.go:172] (0xc000b596b0) (0xc0006ce820) Stream removed, broadcasting: 5\n" Jan 23 23:42:47.141: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-2247.svc.cluster.local\tcanonical name = externalsvc.services-2247.svc.cluster.local.\nName:\texternalsvc.services-2247.svc.cluster.local\nAddress: 10.96.53.130\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2247, will wait for the garbage collector to delete the pods Jan 23 23:42:47.204: INFO: Deleting ReplicationController externalsvc took: 5.841332ms Jan 23 23:42:47.504: INFO: Terminating ReplicationController externalsvc pods took: 300.365776ms Jan 23 23:43:03.226: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:43:03.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2247" for this suite. 
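The type flip above rewrites the Service in place; the externalName below is the FQDN the nslookup output reported as the canonical name, while the port is illustrative. Note that an ExternalName service carries no selector or clusterIP:

# Before: a plain ClusterIP service.
apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
  namespace: services-2247
spec:
  type: ClusterIP
  ports:
  - port: 80                            # illustrative
---
# After: the same name now resolves as a CNAME to the backing service.
apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
  namespace: services-2247
spec:
  type: ExternalName
  externalName: externalsvc.services-2247.svc.cluster.local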
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 • [SLOW TEST:37.056 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":14,"skipped":187,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:43:03.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Jan 23 23:43:15.559: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-9454 PodName:pod-sharedvolume-abd8cba6-c6f8-41f8-96cc-02e4b6db75d3 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 23 23:43:15.559: INFO: >>> kubeConfig: /root/.kube/config I0123 23:43:15.619215 8 log.go:172] (0xc002cce210) (0xc0003e5540) Create stream I0123 23:43:15.619264 8 log.go:172] (0xc002cce210) (0xc0003e5540) Stream added, broadcasting: 1 I0123 23:43:15.623130 8 log.go:172] (0xc002cce210) Reply frame received for 1 I0123 23:43:15.623205 8 log.go:172] (0xc002cce210) (0xc0011db9a0) Create stream I0123 23:43:15.623230 8 log.go:172] (0xc002cce210) (0xc0011db9a0) Stream added, broadcasting: 3 I0123 23:43:15.626408 8 log.go:172] (0xc002cce210) Reply frame received for 3 I0123 23:43:15.626449 8 log.go:172] (0xc002cce210) (0xc0019fea00) Create stream I0123 23:43:15.626473 8 log.go:172] (0xc002cce210) (0xc0019fea00) Stream added, broadcasting: 5 I0123 23:43:15.628630 8 log.go:172] (0xc002cce210) Reply frame received for 5 I0123 23:43:15.709478 8 log.go:172] (0xc002cce210) Data frame received for 3 I0123 23:43:15.709976 8 log.go:172] (0xc0011db9a0) (3) Data frame handling I0123 23:43:15.710469 8 log.go:172] (0xc0011db9a0) (3) Data frame sent I0123 23:43:15.855215 8 log.go:172] (0xc002cce210) Data frame received for 1 I0123 23:43:15.855569 8 log.go:172] (0xc002cce210) (0xc0019fea00) Stream removed, broadcasting: 5 I0123 23:43:15.855664 8 log.go:172] (0xc0003e5540) (1) Data frame handling I0123 23:43:15.855715 8 log.go:172] (0xc0003e5540) (1) Data frame sent I0123 23:43:15.855841 8 log.go:172] (0xc002cce210) (0xc0011db9a0) Stream removed, broadcasting: 3 I0123 23:43:15.855948 8 log.go:172] (0xc002cce210) (0xc0003e5540) Stream removed, broadcasting: 1 I0123 23:43:15.856141 8 log.go:172] (0xc002cce210) (0xc0003e5540) Stream removed, broadcasting: 1 I0123 23:43:15.856154 8 log.go:172] 
(0xc002cce210) (0xc0011db9a0) Stream removed, broadcasting: 3 I0123 23:43:15.856163 8 log.go:172] (0xc002cce210) (0xc0019fea00) Stream removed, broadcasting: 5 I0123 23:43:15.857355 8 log.go:172] (0xc002cce210) Go away received Jan 23 23:43:15.857: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:43:15.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9454" for this suite. • [SLOW TEST:12.560 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":15,"skipped":190,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:43:15.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1898 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 23 23:43:15.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-9067' Jan 23 23:43:16.064: INFO: stderr: "" Jan 23 23:43:16.064: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Jan 23 23:43:26.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-9067 -o json' Jan 23 23:43:26.246: INFO: stderr: "" Jan 23 23:43:26.246: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-01-23T23:43:16Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9067\",\n \"resourceVersion\": \"3903470\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-9067/pods/e2e-test-httpd-pod\",\n \"uid\": \"4bda2dab-2b4d-479c-a5ee-b789e5ca357b\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n 
\"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-56xhn\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-56xhn\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-56xhn\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-23T23:43:16Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-23T23:43:21Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-23T23:43:21Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-23T23:43:16Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://1eadac87fcb565eb55799fea7aebe19d922ce60a8520a47d7f5a1aefe0520774\",\n \"image\": \"httpd:2.4.38-alpine\",\n \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-23T23:43:20Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.2.250\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.2\",\n \"podIPs\": [\n {\n \"ip\": \"10.44.0.2\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-01-23T23:43:16Z\"\n }\n}\n" STEP: replace the image in the pod Jan 23 23:43:26.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9067' Jan 23 23:43:26.580: INFO: stderr: "" Jan 23 23:43:26.580: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1903 Jan 23 23:43:26.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9067' Jan 23 23:43:31.497: INFO: stderr: "" Jan 23 23:43:31.497: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:43:31.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9067" for this suite. 
• [SLOW TEST:15.650 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1894 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":16,"skipped":211,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:43:31.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating all guestbook components Jan 23 23:43:31.623: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Jan 23 23:43:31.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3480' Jan 23 23:43:32.063: INFO: stderr: "" Jan 23 23:43:32.063: INFO: stdout: "service/agnhost-slave created\n" Jan 23 23:43:32.064: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Jan 23 23:43:32.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3480' Jan 23 23:43:32.496: INFO: stderr: "" Jan 23 23:43:32.496: INFO: stdout: "service/agnhost-master created\n" Jan 23 23:43:32.497: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jan 23 23:43:32.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3480' Jan 23 23:43:32.924: INFO: stderr: "" Jan 23 23:43:32.924: INFO: stdout: "service/frontend created\n" Jan 23 23:43:32.925: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Jan 23 23:43:32.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3480' Jan 23 23:43:33.187: INFO: stderr: "" Jan 23 23:43:33.187: INFO: stdout: "deployment.apps/frontend created\n" Jan 23 23:43:33.187: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 23 23:43:33.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3480' Jan 23 23:43:33.494: INFO: stderr: "" Jan 23 23:43:33.494: INFO: stdout: "deployment.apps/agnhost-master created\n" Jan 23 23:43:33.494: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 23 23:43:33.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3480' Jan 23 23:43:33.901: INFO: stderr: "" Jan 23 23:43:33.901: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Jan 23 23:43:33.901: INFO: Waiting for all frontend pods to be Running. Jan 23 23:43:53.953: INFO: Waiting for frontend to serve content. Jan 23 23:43:54.037: INFO: Trying to add a new entry to the guestbook. Jan 23 23:43:54.139: INFO: Verifying that added entry can be retrieved. Jan 23 23:43:54.153: INFO: Failed to get response from guestbook. err: , response: {"data":""} STEP: using delete to clean up resources Jan 23 23:43:59.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3480' Jan 23 23:43:59.637: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 23 23:43:59.637: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Jan 23 23:43:59.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3480' Jan 23 23:43:59.889: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 23 23:43:59.889: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jan 23 23:43:59.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3480' Jan 23 23:44:00.039: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 23 23:44:00.039: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 23 23:44:00.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3480' Jan 23 23:44:00.153: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 23 23:44:00.153: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 23 23:44:00.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3480' Jan 23 23:44:00.283: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 23 23:44:00.283: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jan 23 23:44:00.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3480' Jan 23 23:44:00.380: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 23 23:44:00.380: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:44:00.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3480" for this suite. 
• [SLOW TEST:28.896 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:387 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":17,"skipped":221,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:44:00.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:45:00.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7265" for this suite. 
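The probe spec above relies on the fact that only liveness probes trigger restarts: a readiness probe that always fails leaves the pod Running but never Ready for the full minute the test watches it. A minimal sketch with illustrative names and timings:

apiVersion: v1
kind: Pod
metadata:
  name: readiness-never                 # hypothetical name
spec:
  containers:
  - name: readiness-never
    image: docker.io/library/busybox:1.29
    command: ["sleep", "600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]         # always exits 1, so Ready stays False
      initialDelaySeconds: 5
      periodSeconds: 5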
• [SLOW TEST:60.367 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":238,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:45:00.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 23 23:45:01.096: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"e933831f-1423-4610-ab05-74ef06130306", Controller:(*bool)(0xc002fa5ad2), BlockOwnerDeletion:(*bool)(0xc002fa5ad3)}} Jan 23 23:45:01.119: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"7c6f98e9-6219-4c9e-ac00-73492f75b099", Controller:(*bool)(0xc002ecc77a), BlockOwnerDeletion:(*bool)(0xc002ecc77b)}} Jan 23 23:45:01.215: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"cd576947-920c-4604-a258-99a7d52849d8", Controller:(*bool)(0xc002ecc90a), BlockOwnerDeletion:(*bool)(0xc002ecc90b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:45:06.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1242" for this suite. 
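The circle above is built purely from ownerReferences: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2, yet the garbage collector must still make progress. One link of the circle, using pod3's UID from the log (the container spec is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: e933831f-1423-4610-ab05-74ef06130306   # pod3's UID, from the log above
    controller: true
    blockOwnerDeletion: true
spec:
  containers:
  - name: c
    image: docker.io/library/busybox:1.29
    command: ["sleep", "600"]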
• [SLOW TEST:5.548 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":19,"skipped":252,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:45:06.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 23 23:45:06.952: INFO: Waiting up to 5m0s for pod "pod-8312c9f0-3bd9-4ffa-8ed3-c99c8bb46089" in namespace "emptydir-4933" to be "success or failure" Jan 23 23:45:06.973: INFO: Pod "pod-8312c9f0-3bd9-4ffa-8ed3-c99c8bb46089": Phase="Pending", Reason="", readiness=false. Elapsed: 20.13498ms Jan 23 23:45:08.980: INFO: Pod "pod-8312c9f0-3bd9-4ffa-8ed3-c99c8bb46089": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027181985s Jan 23 23:45:10.991: INFO: Pod "pod-8312c9f0-3bd9-4ffa-8ed3-c99c8bb46089": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03870856s Jan 23 23:45:13.041: INFO: Pod "pod-8312c9f0-3bd9-4ffa-8ed3-c99c8bb46089": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088428397s Jan 23 23:45:15.048: INFO: Pod "pod-8312c9f0-3bd9-4ffa-8ed3-c99c8bb46089": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095706529s Jan 23 23:45:17.055: INFO: Pod "pod-8312c9f0-3bd9-4ffa-8ed3-c99c8bb46089": Phase="Pending", Reason="", readiness=false. Elapsed: 10.103009652s Jan 23 23:45:19.063: INFO: Pod "pod-8312c9f0-3bd9-4ffa-8ed3-c99c8bb46089": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.110156126s STEP: Saw pod success Jan 23 23:45:19.063: INFO: Pod "pod-8312c9f0-3bd9-4ffa-8ed3-c99c8bb46089" satisfied condition "success or failure" Jan 23 23:45:19.067: INFO: Trying to get logs from node jerma-node pod pod-8312c9f0-3bd9-4ffa-8ed3-c99c8bb46089 container test-container: STEP: delete the pod Jan 23 23:45:19.143: INFO: Waiting for pod pod-8312c9f0-3bd9-4ffa-8ed3-c99c8bb46089 to disappear Jan 23 23:45:19.159: INFO: Pod pod-8312c9f0-3bd9-4ffa-8ed3-c99c8bb46089 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:45:19.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4933" for this suite. 
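The (root,0644,tmpfs) case above is a memory-backed emptyDir plus an assertion on file mode and content; a sketch, with the command and names illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0644               # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo hello > /mnt/test/f && chmod 0644 /mnt/test/f && ls -l /mnt/test/f"]
    volumeMounts:
    - name: tmpfs-vol
      mountPath: /mnt/test
  volumes:
  - name: tmpfs-vol
    emptyDir:
      medium: Memory                    # tmpfs backing, per the test name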
• [SLOW TEST:12.841 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":298,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:45:19.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 23 23:45:19.410: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1531b7aa-b95d-40f5-8f11-4f7f989cabab" in namespace "projected-6444" to be "success or failure" Jan 23 23:45:19.557: INFO: Pod "downwardapi-volume-1531b7aa-b95d-40f5-8f11-4f7f989cabab": Phase="Pending", Reason="", readiness=false. Elapsed: 146.898219ms Jan 23 23:45:21.567: INFO: Pod "downwardapi-volume-1531b7aa-b95d-40f5-8f11-4f7f989cabab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156066978s Jan 23 23:45:23.573: INFO: Pod "downwardapi-volume-1531b7aa-b95d-40f5-8f11-4f7f989cabab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162505058s Jan 23 23:45:25.580: INFO: Pod "downwardapi-volume-1531b7aa-b95d-40f5-8f11-4f7f989cabab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.169833357s Jan 23 23:45:27.587: INFO: Pod "downwardapi-volume-1531b7aa-b95d-40f5-8f11-4f7f989cabab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.176194759s STEP: Saw pod success Jan 23 23:45:27.587: INFO: Pod "downwardapi-volume-1531b7aa-b95d-40f5-8f11-4f7f989cabab" satisfied condition "success or failure" Jan 23 23:45:27.598: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-1531b7aa-b95d-40f5-8f11-4f7f989cabab container client-container: STEP: delete the pod Jan 23 23:45:27.647: INFO: Waiting for pod downwardapi-volume-1531b7aa-b95d-40f5-8f11-4f7f989cabab to disappear Jan 23 23:45:27.665: INFO: Pod downwardapi-volume-1531b7aa-b95d-40f5-8f11-4f7f989cabab no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:45:27.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6444" for this suite. 
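The DefaultMode spec above asserts that every file a projected volume creates carries the volume-level mode; a sketch, with 0400 as an illustrative mode:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]   # expect -r-------- for 0400
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400                 # the mode the test asserts on each file
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name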
• [SLOW TEST:8.501 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":332,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:45:27.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 23 23:45:27.868: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8049446d-cb89-4abb-bba5-1de1c9d78180" in namespace "projected-553" to be "success or failure" Jan 23 23:45:27.935: INFO: Pod "downwardapi-volume-8049446d-cb89-4abb-bba5-1de1c9d78180": Phase="Pending", Reason="", readiness=false. Elapsed: 66.837033ms Jan 23 23:45:29.943: INFO: Pod "downwardapi-volume-8049446d-cb89-4abb-bba5-1de1c9d78180": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07440479s Jan 23 23:45:31.948: INFO: Pod "downwardapi-volume-8049446d-cb89-4abb-bba5-1de1c9d78180": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080110122s Jan 23 23:45:33.954: INFO: Pod "downwardapi-volume-8049446d-cb89-4abb-bba5-1de1c9d78180": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085533685s Jan 23 23:45:35.965: INFO: Pod "downwardapi-volume-8049446d-cb89-4abb-bba5-1de1c9d78180": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.096465678s STEP: Saw pod success Jan 23 23:45:35.965: INFO: Pod "downwardapi-volume-8049446d-cb89-4abb-bba5-1de1c9d78180" satisfied condition "success or failure" Jan 23 23:45:35.970: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-8049446d-cb89-4abb-bba5-1de1c9d78180 container client-container: STEP: delete the pod Jan 23 23:45:36.025: INFO: Waiting for pod downwardapi-volume-8049446d-cb89-4abb-bba5-1de1c9d78180 to disappear Jan 23 23:45:36.089: INFO: Pod downwardapi-volume-8049446d-cb89-4abb-bba5-1de1c9d78180 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:45:36.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-553" for this suite. 
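And the cpu-limit variant: a resourceFieldRef surfaces the container's limits.cpu into a projected file (the earlier Downward API cpu-request spec works the same way with requests.cpu). Values here are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                     # surfaced as "1": cores, rounded up, under the default divisor
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu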
• [SLOW TEST:8.429 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":351,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:45:36.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 23 23:45:37.419: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 23 23:45:39.718: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419937, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419937, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419937, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419937, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 23:45:41.728: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419937, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419937, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419937, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419937, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 23:45:43.723: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419937, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419937, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419937, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419937, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 23 23:45:46.784: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:45:46.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8019" for this suite. STEP: Destroying namespace "webhook-8019-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:11.036 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":23,"skipped":397,"failed":0} SSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:45:47.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 23 23:45:47.211: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-233cb7e1-b4fc-4d5c-bbbc-153cc863f0a0" in namespace "security-context-test-8795" to be "success or failure" Jan 23 23:45:47.215: INFO: Pod "busybox-readonly-false-233cb7e1-b4fc-4d5c-bbbc-153cc863f0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.732053ms Jan 23 23:45:49.223: INFO: Pod "busybox-readonly-false-233cb7e1-b4fc-4d5c-bbbc-153cc863f0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012300082s Jan 23 23:45:51.231: INFO: Pod "busybox-readonly-false-233cb7e1-b4fc-4d5c-bbbc-153cc863f0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020371891s Jan 23 23:45:53.236: INFO: Pod "busybox-readonly-false-233cb7e1-b4fc-4d5c-bbbc-153cc863f0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025754022s Jan 23 23:45:55.242: INFO: Pod "busybox-readonly-false-233cb7e1-b4fc-4d5c-bbbc-153cc863f0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.031549405s Jan 23 23:45:57.248: INFO: Pod "busybox-readonly-false-233cb7e1-b4fc-4d5c-bbbc-153cc863f0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.037043628s Jan 23 23:45:59.253: INFO: Pod "busybox-readonly-false-233cb7e1-b4fc-4d5c-bbbc-153cc863f0a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.042252501s Jan 23 23:45:59.253: INFO: Pod "busybox-readonly-false-233cb7e1-b4fc-4d5c-bbbc-153cc863f0a0" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:45:59.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8795" for this suite. 
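The Security Context case above succeeds because readOnlyRootFilesystem=false leaves the container's root filesystem writable, so a plain write to it completes and the pod reaches Succeeded. A minimal container sketch (image tag and command are illustrative):

    package example

    import corev1 "k8s.io/api/core/v1"

    // writableRootfsContainer explicitly requests a writable root filesystem;
    // with readOnlyRootFilesystem=true the write below would fail instead.
    func writableRootfsContainer() corev1.Container {
        readOnly := false
        return corev1.Container{
            Name:    "busybox-readonly-false",
            Image:   "docker.io/library/busybox:1.29",
            Command: []string{"sh", "-c", "echo ok > /probe && cat /probe"},
            SecurityContext: &corev1.SecurityContext{
                ReadOnlyRootFilesystem: &readOnly,
            },
        }
    }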
• [SLOW TEST:12.124 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":403,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:45:59.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jan 23 23:45:59.472: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7639 /api/v1/namespaces/watch-7639/configmaps/e2e-watch-test-resource-version d879c439-7fae-4f2b-936f-3784b2512788 3904247 0 2020-01-23 23:45:59 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 23 23:45:59.472: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7639 /api/v1/namespaces/watch-7639/configmaps/e2e-watch-test-resource-version d879c439-7fae-4f2b-936f-3784b2512788 3904248 0 2020-01-23 23:45:59 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:45:59.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7639" for this suite. 
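Note what the Watchers test observed above: only the second MODIFIED (mutation: 2) and the DELETED event arrived, because the watch was opened at the resourceVersion returned by the first update, and a watch only delivers events newer than its starting version. A client-go sketch of that pattern (namespace and variable names are illustrative; recent context-aware signatures assumed):

    package example

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // watchFromRV opens a configmap watch starting at a specific resourceVersion,
    // so earlier changes to the object are not replayed.
    func watchFromRV(cs kubernetes.Interface, ns, rv string) error {
        w, err := cs.CoreV1().ConfigMaps(ns).Watch(context.TODO(), metav1.ListOptions{
            ResourceVersion: rv, // e.g. metadata.resourceVersion from the first update's response
        })
        if err != nil {
            return err
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
        }
        return nil
    }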
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":25,"skipped":446,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:45:59.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 23 23:45:59.568: INFO: Waiting up to 5m0s for pod "pod-cdae2e23-0009-4798-8b4f-429d640f5b03" in namespace "emptydir-3721" to be "success or failure" Jan 23 23:45:59.614: INFO: Pod "pod-cdae2e23-0009-4798-8b4f-429d640f5b03": Phase="Pending", Reason="", readiness=false. Elapsed: 46.338291ms Jan 23 23:46:01.618: INFO: Pod "pod-cdae2e23-0009-4798-8b4f-429d640f5b03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050355999s Jan 23 23:46:03.625: INFO: Pod "pod-cdae2e23-0009-4798-8b4f-429d640f5b03": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057144526s Jan 23 23:46:05.631: INFO: Pod "pod-cdae2e23-0009-4798-8b4f-429d640f5b03": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063737231s Jan 23 23:46:07.640: INFO: Pod "pod-cdae2e23-0009-4798-8b4f-429d640f5b03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072493387s STEP: Saw pod success Jan 23 23:46:07.640: INFO: Pod "pod-cdae2e23-0009-4798-8b4f-429d640f5b03" satisfied condition "success or failure" Jan 23 23:46:07.650: INFO: Trying to get logs from node jerma-node pod pod-cdae2e23-0009-4798-8b4f-429d640f5b03 container test-container: STEP: delete the pod Jan 23 23:46:08.002: INFO: Waiting for pod pod-cdae2e23-0009-4798-8b4f-429d640f5b03 to disappear Jan 23 23:46:08.013: INFO: Pod pod-cdae2e23-0009-4798-8b4f-429d640f5b03 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:46:08.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3721" for this suite. 
• [SLOW TEST:8.540 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":460,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:46:08.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test hostPath mode Jan 23 23:46:08.085: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7438" to be "success or failure" Jan 23 23:46:08.170: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 84.617035ms Jan 23 23:46:10.178: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092459857s Jan 23 23:46:12.184: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098309793s Jan 23 23:46:14.191: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105872372s Jan 23 23:46:16.198: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.113214348s Jan 23 23:46:18.207: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.121632593s Jan 23 23:46:20.214: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.128744565s STEP: Saw pod success Jan 23 23:46:20.214: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jan 23 23:46:20.219: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: STEP: delete the pod Jan 23 23:46:20.423: INFO: Waiting for pod pod-host-path-test to disappear Jan 23 23:46:20.434: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:46:20.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-7438" for this suite. 
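The HostPath mode check above mounts a directory from the node and stats the mount point inside the container. A sketch of the volume definition (node path is illustrative; the test may rely on the default hostPath type rather than the one set here):

    package example

    import corev1 "k8s.io/api/core/v1"

    // hostPathVolume mounts a node directory into the pod; the test then checks
    // the mode bits of the mounted path from inside the container.
    func hostPathVolume() corev1.Volume {
        hostPathType := corev1.HostPathDirectoryOrCreate // assumption, for self-containment
        return corev1.Volume{
            Name: "test-volume",
            VolumeSource: corev1.VolumeSource{
                HostPath: &corev1.HostPathVolumeSource{
                    Path: "/tmp/test-host-path", // illustrative node path
                    Type: &hostPathType,
                },
            },
        }
    }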
• [SLOW TEST:12.424 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":470,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:46:20.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 23 23:46:21.625: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 23 23:46:23.644: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419981, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419981, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419981, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419981, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 23:46:25.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419981, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419981, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419981, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419981, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 23:46:27.653: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419981, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419981, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419981, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715419981, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 23 23:46:30.680: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:46:40.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4645" for this suite. STEP: Destroying namespace "webhook-4645-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:20.880 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":28,"skipped":493,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:46:41.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1862 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 23 23:46:41.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9901' Jan 23 23:46:41.658: INFO: stderr: "" Jan 23 23:46:41.658: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1867 Jan 23 23:46:41.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9901' Jan 23 23:46:47.981: INFO: stderr: "" Jan 23 23:46:47.981: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:46:47.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9901" for this suite. 
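With the run-pod/v1 generator and --restart=Never, the kubectl command logged above creates a bare Pod with no owning controller, which is why the verification step only has to look for the pod object itself. The rough Go equivalent (names and image taken from the command above; the generator's exact defaults are not reproduced here):

    package example

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // barePod is approximately what `kubectl run --restart=Never` boils down to:
    // a single Pod with RestartPolicyNever and no Deployment or Job wrapping it.
    func barePod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-pod"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "e2e-test-httpd-pod",
                    Image: "docker.io/library/httpd:2.4.38-alpine",
                }},
            },
        }
    }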
• [SLOW TEST:6.707 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1858 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":29,"skipped":501,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:46:48.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 23 23:46:48.125: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e9cbb942-832c-4ab5-b19b-6df9848b03d9" in namespace "downward-api-9894" to be "success or failure" Jan 23 23:46:48.207: INFO: Pod "downwardapi-volume-e9cbb942-832c-4ab5-b19b-6df9848b03d9": Phase="Pending", Reason="", readiness=false. Elapsed: 81.33357ms Jan 23 23:46:50.213: INFO: Pod "downwardapi-volume-e9cbb942-832c-4ab5-b19b-6df9848b03d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088045359s Jan 23 23:46:52.220: INFO: Pod "downwardapi-volume-e9cbb942-832c-4ab5-b19b-6df9848b03d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094824344s Jan 23 23:46:54.227: INFO: Pod "downwardapi-volume-e9cbb942-832c-4ab5-b19b-6df9848b03d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101609294s Jan 23 23:46:56.234: INFO: Pod "downwardapi-volume-e9cbb942-832c-4ab5-b19b-6df9848b03d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.108566945s STEP: Saw pod success Jan 23 23:46:56.234: INFO: Pod "downwardapi-volume-e9cbb942-832c-4ab5-b19b-6df9848b03d9" satisfied condition "success or failure" Jan 23 23:46:56.238: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-e9cbb942-832c-4ab5-b19b-6df9848b03d9 container client-container: STEP: delete the pod Jan 23 23:46:56.274: INFO: Waiting for pod downwardapi-volume-e9cbb942-832c-4ab5-b19b-6df9848b03d9 to disappear Jan 23 23:46:56.340: INFO: Pod downwardapi-volume-e9cbb942-832c-4ab5-b19b-6df9848b03d9 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:46:56.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9894" for this suite. 
• [SLOW TEST:8.316 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":504,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:46:56.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jan 23 23:47:02.539: INFO: 0 pods remaining Jan 23 23:47:02.539: INFO: 0 pods has nil DeletionTimestamp Jan 23 23:47:02.539: INFO: STEP: Gathering metrics W0123 23:47:03.576611 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 23 23:47:03.576: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:47:03.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3662" for this suite. 
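"Keep the rc around until all its pods are deleted if the deleteOptions says so" describes foreground cascading deletion: the owner object gets a deletionTimestamp plus a foregroundDeletion finalizer, and is only removed after the garbage collector has deleted its dependents — hence the "0 pods remaining" checks before the RC disappears in the next test. A client-go sketch (RC name is the caller's; recent context-aware signatures assumed):

    package example

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deleteRCForeground deletes a replication controller with foreground
    // propagation, so the RC object lingers until the GC has removed its pods.
    func deleteRCForeground(cs kubernetes.Interface, ns, name string) error {
        policy := metav1.DeletePropagationForeground
        return cs.CoreV1().ReplicationControllers(ns).Delete(context.TODO(), name,
            metav1.DeleteOptions{PropagationPolicy: &policy})
    }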
• [SLOW TEST:7.235 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":31,"skipped":509,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:47:03.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 23 23:47:05.188: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 23 23:47:08.567: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420025, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420025, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420025, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420024, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 23:47:10.718: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420025, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420025, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420025, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420024, loc:(*time.Location)(0x7d7cf00)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 23:47:12.597: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420025, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420025, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420025, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420024, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 23:47:14.578: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420025, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420025, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420025, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420024, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 23:47:16.607: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420025, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420025, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420025, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420024, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 23:47:18.577: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420025, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420025, loc:(*time.Location)(0x7d7cf00)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420025, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420024, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 23:47:20.577: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420025, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420025, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420025, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420024, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 23:47:22.585: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420025, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420025, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420025, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420024, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 23 23:47:25.660: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:47:25.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7805" for this suite. STEP: Destroying namespace "webhook-7805-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:22.244 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":32,"skipped":520,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:47:25.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating secret secrets-1684/secret-test-2a6ee9d6-3c7c-4120-a1c0-8aa3bb6c8a82 STEP: Creating a pod to test consume secrets Jan 23 23:47:25.965: INFO: Waiting up to 5m0s for pod "pod-configmaps-7893536a-2258-43c9-812d-c1c6302e6faf" in namespace "secrets-1684" to be "success or failure" Jan 23 23:47:25.981: INFO: Pod "pod-configmaps-7893536a-2258-43c9-812d-c1c6302e6faf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.240329ms Jan 23 23:47:27.988: INFO: Pod "pod-configmaps-7893536a-2258-43c9-812d-c1c6302e6faf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022803447s Jan 23 23:47:30.005: INFO: Pod "pod-configmaps-7893536a-2258-43c9-812d-c1c6302e6faf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039874042s Jan 23 23:47:32.010: INFO: Pod "pod-configmaps-7893536a-2258-43c9-812d-c1c6302e6faf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045015238s Jan 23 23:47:34.025: INFO: Pod "pod-configmaps-7893536a-2258-43c9-812d-c1c6302e6faf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059981808s Jan 23 23:47:36.031: INFO: Pod "pod-configmaps-7893536a-2258-43c9-812d-c1c6302e6faf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.065747125s STEP: Saw pod success Jan 23 23:47:36.031: INFO: Pod "pod-configmaps-7893536a-2258-43c9-812d-c1c6302e6faf" satisfied condition "success or failure" Jan 23 23:47:36.034: INFO: Trying to get logs from node jerma-node pod pod-configmaps-7893536a-2258-43c9-812d-c1c6302e6faf container env-test: STEP: delete the pod Jan 23 23:47:36.064: INFO: Waiting for pod pod-configmaps-7893536a-2258-43c9-812d-c1c6302e6faf to disappear Jan 23 23:47:36.068: INFO: Pod pod-configmaps-7893536a-2258-43c9-812d-c1c6302e6faf no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:47:36.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1684" for this suite. • [SLOW TEST:10.245 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":530,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:47:36.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 23 23:47:36.979: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 23 23:47:38.997: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420056, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420056, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420057, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420056, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 23:47:41.004: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420056, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420056, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420057, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420056, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 23:47:43.320: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420056, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420056, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420057, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420056, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 23 23:47:46.042: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:47:46.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5931" for this suite. STEP: Destroying namespace "webhook-5931-markers" for this suite. 
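The listing/deletion steps just logged exercise collection semantics on webhook configurations: list the mutating webhooks by selector, delete them all in one call, and confirm a subsequent configMap is created unmutated. A client-go sketch of that pair of calls (label selector is illustrative; recent context-aware signatures assumed):

    package example

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // listAndDeleteMutatingWebhooks lists mutating webhook configurations by
    // label, then removes the whole set with a single DeleteCollection call.
    func listAndDeleteMutatingWebhooks(cs kubernetes.Interface, selector string) error {
        list, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().List(
            context.TODO(), metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return err
        }
        _ = list // e.g. assert on len(list.Items) before deleting
        return cs.AdmissionregistrationV1().MutatingWebhookConfigurations().DeleteCollection(
            context.TODO(), metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: selector})
    }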
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:10.654 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":34,"skipped":532,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:47:46.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 23 23:47:46.899: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Jan 23 23:47:46.934: INFO: Number of nodes with available pods: 0 Jan 23 23:47:46.934: INFO: Node jerma-node is running more than one daemon pod Jan 23 23:47:48.556: INFO: Number of nodes with available pods: 0 Jan 23 23:47:48.556: INFO: Node jerma-node is running more than one daemon pod Jan 23 23:47:49.078: INFO: Number of nodes with available pods: 0 Jan 23 23:47:49.079: INFO: Node jerma-node is running more than one daemon pod Jan 23 23:47:50.066: INFO: Number of nodes with available pods: 0 Jan 23 23:47:50.066: INFO: Node jerma-node is running more than one daemon pod Jan 23 23:47:50.951: INFO: Number of nodes with available pods: 0 Jan 23 23:47:50.951: INFO: Node jerma-node is running more than one daemon pod Jan 23 23:47:52.878: INFO: Number of nodes with available pods: 0 Jan 23 23:47:52.878: INFO: Node jerma-node is running more than one daemon pod Jan 23 23:47:53.921: INFO: Number of nodes with available pods: 0 Jan 23 23:47:53.921: INFO: Node jerma-node is running more than one daemon pod Jan 23 23:47:54.081: INFO: Number of nodes with available pods: 0 Jan 23 23:47:54.081: INFO: Node jerma-node is running more than one daemon pod Jan 23 23:47:55.374: INFO: Number of nodes with available pods: 0 Jan 23 23:47:55.375: INFO: Node jerma-node is running more than one daemon pod Jan 23 23:47:56.011: INFO: Number of nodes with available pods: 0 Jan 23 23:47:56.011: INFO: Node jerma-node is running more than one daemon pod Jan 23 23:47:56.948: INFO: Number of nodes with available pods: 0 Jan 23 23:47:56.948: INFO: Node jerma-node is running more than one daemon pod Jan 23 23:47:57.948: INFO: Number of nodes with available pods: 1 Jan 23 23:47:57.948: INFO: Node jerma-node is running more than one daemon pod Jan 23 23:47:58.945: INFO: Number of nodes with available pods: 2 Jan 23 23:47:58.945: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jan 23 23:47:59.024: INFO: Wrong image for pod: daemon-set-5r2kq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:47:59.024: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:00.042: INFO: Wrong image for pod: daemon-set-5r2kq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:00.042: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:01.178: INFO: Wrong image for pod: daemon-set-5r2kq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:01.178: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:02.043: INFO: Wrong image for pod: daemon-set-5r2kq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:02.043: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:03.068: INFO: Wrong image for pod: daemon-set-5r2kq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:03.068: INFO: Wrong image for pod: daemon-set-9fts6. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:04.044: INFO: Wrong image for pod: daemon-set-5r2kq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:04.044: INFO: Pod daemon-set-5r2kq is not available Jan 23 23:48:04.044: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:05.050: INFO: Wrong image for pod: daemon-set-5r2kq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:05.050: INFO: Pod daemon-set-5r2kq is not available Jan 23 23:48:05.050: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:06.045: INFO: Wrong image for pod: daemon-set-5r2kq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:06.045: INFO: Pod daemon-set-5r2kq is not available Jan 23 23:48:06.045: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:07.043: INFO: Wrong image for pod: daemon-set-5r2kq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:07.043: INFO: Pod daemon-set-5r2kq is not available Jan 23 23:48:07.043: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:08.042: INFO: Wrong image for pod: daemon-set-5r2kq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:08.042: INFO: Pod daemon-set-5r2kq is not available Jan 23 23:48:08.042: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:09.042: INFO: Wrong image for pod: daemon-set-5r2kq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:09.042: INFO: Pod daemon-set-5r2kq is not available Jan 23 23:48:09.042: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:10.044: INFO: Wrong image for pod: daemon-set-5r2kq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:10.044: INFO: Pod daemon-set-5r2kq is not available Jan 23 23:48:10.044: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:11.043: INFO: Wrong image for pod: daemon-set-5r2kq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:11.043: INFO: Pod daemon-set-5r2kq is not available Jan 23 23:48:11.043: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:12.042: INFO: Wrong image for pod: daemon-set-5r2kq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 23 23:48:12.042: INFO: Pod daemon-set-5r2kq is not available Jan 23 23:48:12.042: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:13.042: INFO: Wrong image for pod: daemon-set-5r2kq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:13.042: INFO: Pod daemon-set-5r2kq is not available Jan 23 23:48:13.042: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:14.044: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:14.044: INFO: Pod daemon-set-t5b2h is not available Jan 23 23:48:15.043: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:15.043: INFO: Pod daemon-set-t5b2h is not available Jan 23 23:48:16.171: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:16.171: INFO: Pod daemon-set-t5b2h is not available Jan 23 23:48:17.041: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:17.041: INFO: Pod daemon-set-t5b2h is not available Jan 23 23:48:18.748: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:18.748: INFO: Pod daemon-set-t5b2h is not available Jan 23 23:48:19.043: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:19.043: INFO: Pod daemon-set-t5b2h is not available Jan 23 23:48:20.041: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:20.041: INFO: Pod daemon-set-t5b2h is not available Jan 23 23:48:21.057: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:22.044: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:23.041: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:24.043: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:25.042: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:25.042: INFO: Pod daemon-set-9fts6 is not available Jan 23 23:48:26.045: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:26.045: INFO: Pod daemon-set-9fts6 is not available Jan 23 23:48:27.042: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 23 23:48:27.042: INFO: Pod daemon-set-9fts6 is not available Jan 23 23:48:28.043: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:28.043: INFO: Pod daemon-set-9fts6 is not available Jan 23 23:48:29.041: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:29.041: INFO: Pod daemon-set-9fts6 is not available Jan 23 23:48:30.042: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:30.042: INFO: Pod daemon-set-9fts6 is not available Jan 23 23:48:31.040: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:31.040: INFO: Pod daemon-set-9fts6 is not available Jan 23 23:48:32.042: INFO: Wrong image for pod: daemon-set-9fts6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 23 23:48:32.042: INFO: Pod daemon-set-9fts6 is not available Jan 23 23:48:33.063: INFO: Pod daemon-set-wnck7 is not available STEP: Check that daemon pods are still running on every node of the cluster. Jan 23 23:48:33.091: INFO: Number of nodes with available pods: 1 Jan 23 23:48:33.091: INFO: Node jerma-node is running more than one daemon pod Jan 23 23:48:34.100: INFO: Number of nodes with available pods: 1 Jan 23 23:48:34.100: INFO: Node jerma-node is running more than one daemon pod Jan 23 23:48:35.123: INFO: Number of nodes with available pods: 1 Jan 23 23:48:35.123: INFO: Node jerma-node is running more than one daemon pod Jan 23 23:48:36.154: INFO: Number of nodes with available pods: 1 Jan 23 23:48:36.154: INFO: Node jerma-node is running more than one daemon pod Jan 23 23:48:37.102: INFO: Number of nodes with available pods: 1 Jan 23 23:48:37.102: INFO: Node jerma-node is running more than one daemon pod Jan 23 23:48:38.102: INFO: Number of nodes with available pods: 1 Jan 23 23:48:38.102: INFO: Node jerma-node is running more than one daemon pod Jan 23 23:48:39.110: INFO: Number of nodes with available pods: 2 Jan 23 23:48:39.110: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2678, will wait for the garbage collector to delete the pods Jan 23 23:48:39.202: INFO: Deleting DaemonSet.extensions daemon-set took: 13.182921ms Jan 23 23:48:39.602: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.427351ms Jan 23 23:48:52.438: INFO: Number of nodes with available pods: 0 Jan 23 23:48:52.438: INFO: Number of running nodes: 0, number of available pods: 0 Jan 23 23:48:52.441: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2678/daemonsets","resourceVersion":"3905165"},"items":null} Jan 23 23:48:52.444: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2678/pods","resourceVersion":"3905165"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 
23:48:52.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2678" for this suite. • [SLOW TEST:65.732 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":35,"skipped":550,"failed":0} [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:48:52.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 23 23:48:52.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8382' Jan 23 23:48:52.857: INFO: stderr: "" Jan 23 23:48:52.857: INFO: stdout: "replicationcontroller/agnhost-master created\n" Jan 23 23:48:52.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8382' Jan 23 23:48:53.160: INFO: stderr: "" Jan 23 23:48:53.160: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jan 23 23:48:54.167: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 23:48:54.168: INFO: Found 0 / 1 Jan 23 23:48:55.168: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 23:48:55.168: INFO: Found 0 / 1 Jan 23 23:48:56.176: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 23:48:56.176: INFO: Found 0 / 1 Jan 23 23:48:57.184: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 23:48:57.184: INFO: Found 0 / 1 Jan 23 23:48:58.204: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 23:48:58.205: INFO: Found 0 / 1 Jan 23 23:48:59.167: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 23:48:59.167: INFO: Found 0 / 1 Jan 23 23:49:00.168: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 23:49:00.168: INFO: Found 0 / 1 Jan 23 23:49:01.169: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 23:49:01.169: INFO: Found 0 / 1 Jan 23 23:49:02.171: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 23:49:02.172: INFO: Found 1 / 1 Jan 23 23:49:02.172: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 23 23:49:02.176: INFO: Selector matched 1 pods for map[app:agnhost] Jan 23 23:49:02.176: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
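------------------------------
The "Selector matched 1 pods ... Found 0 / 1" entries above come from polling pods by label until every match reports phase Running. A compilable sketch of the same loop, again assuming client-go v0.18+; waitForAgnhost is an invented name, and the one-second interval and five-minute timeout mirror the "WaitFor completed with timeout 5m0s" line.

package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForAgnhost(client kubernetes.Interface, ns string) error {
	return wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=agnhost"})
		if err != nil {
			return false, err
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		fmt.Printf("Found %d / %d\n", running, len(pods.Items))
		// Done once at least one pod matched and all matches are Running.
		return running > 0 && running == len(pods.Items), nil
	})
}
------------------------------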
Jan 23 23:49:02.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-b56cc --namespace=kubectl-8382' Jan 23 23:49:02.337: INFO: stderr: "" Jan 23 23:49:02.337: INFO: stdout: "Name: agnhost-master-b56cc\nNamespace: kubectl-8382\nPriority: 0\nNode: jerma-node/10.96.2.250\nStart Time: Thu, 23 Jan 2020 23:48:52 +0000\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.44.0.1\nIPs:\n IP: 10.44.0.1\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: docker://eed40928c5c5ddd2cb5daea90ef357a98f2f4ab028222674581f0204cca46f3f\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 23 Jan 2020 23:49:00 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-pnps6 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-pnps6:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-pnps6\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled <unknown> default-scheduler Successfully assigned kubectl-8382/agnhost-master-b56cc to jerma-node\n Normal Pulled 5s kubelet, jerma-node Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 3s kubelet, jerma-node Created container agnhost-master\n Normal Started 2s kubelet, jerma-node Started container agnhost-master\n" Jan 23 23:49:02.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-8382' Jan 23 23:49:02.448: INFO: stderr: "" Jan 23 23:49:02.448: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-8382\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 10s replication-controller Created pod: agnhost-master-b56cc\n" Jan 23 23:49:02.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-8382' Jan 23 23:49:02.553: INFO: stderr: "" Jan 23 23:49:02.553: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-8382\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.134.62\nPort: <unset> 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: <none>\n" Jan 23 23:49:02.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node' Jan 23 23:49:02.689: INFO: stderr: "" Jan 23 23:49:02.689: INFO: stdout: "Name: jerma-node\nRoles: <none>\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-node\n kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 04 Jan 2020 11:59:52 +0000\nTaints: <none>\nUnschedulable: false\nLease:\n HolderIdentity: jerma-node\n AcquireTime: <unset>\n RenewTime: Thu, 23 Jan 2020 23:48:53 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 04 Jan 2020 12:00:49 +0000 Sat, 04 Jan 2020 12:00:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Thu, 23 Jan 2020 23:48:56 +0000 Sat, 04 Jan 2020 11:59:52 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 23 Jan 2020 23:48:56 +0000 Sat, 04 Jan 2020 11:59:52 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 23 Jan 2020 23:48:56 +0000 Sat, 04 Jan 2020 11:59:52 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 23 Jan 2020 23:48:56 +0000 Sat, 04 Jan 2020 12:00:52 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.96.2.250\n Hostname: jerma-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: bdc16344252549dd902c3a5d68b22f41\n System UUID: BDC16344-2525-49DD-902C-3A5D68B22F41\n Boot ID: eec61fc4-8bf6-487f-8f93-ea9731fe757a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-dsf66 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19d\n kube-system weave-net-kz8lv 20m (0%) 0 (0%) 0 (0%) 0 (0%) 19d\n kubectl-8382 agnhost-master-b56cc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n" Jan 23 23:49:02.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-8382' Jan 23 23:49:02.833: INFO: stderr: "" Jan 23 23:49:02.833: INFO: stdout: "Name: kubectl-8382\nLabels: e2e-framework=kubectl\n e2e-run=5f7def5b-5066-4eb6-93e4-ded65b2168a6\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:49:02.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8382" for this suite.
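------------------------------
Each "Running '/usr/local/bin/kubectl ... describe ...'" entry above is a plain subprocess invocation with stdout and stderr captured separately, which is why the log shows paired stderr: ""/stdout: "..." lines. A small standalone Go equivalent for the pod case; the binary path, flags, and names are copied from the log, this is a sketch rather than the framework's own kubectl runner, and the rc, service, node, and namespace variants differ only in arguments.

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("/usr/local/bin/kubectl",
		"--kubeconfig=/root/.kube/config",
		"describe", "pod", "agnhost-master-b56cc",
		"--namespace=kubectl-8382")
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	if err := cmd.Run(); err != nil {
		fmt.Printf("stderr: %q, err: %v\n", stderr.String(), err)
		return
	}
	// The test then asserts this output mentions the pod name,
	// image, node, labels, and recent events.
	fmt.Print(stdout.String())
}
------------------------------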
• [SLOW TEST:10.379 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1155 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":36,"skipped":550,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:49:02.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-map-38990895-d156-41fe-9684-539984e8a3d8 STEP: Creating a pod to test consume secrets Jan 23 23:49:02.978: INFO: Waiting up to 5m0s for pod "pod-secrets-7d5ff96b-2814-43dd-9f12-47e6c3971749" in namespace "secrets-2184" to be "success or failure" Jan 23 23:49:03.004: INFO: Pod "pod-secrets-7d5ff96b-2814-43dd-9f12-47e6c3971749": Phase="Pending", Reason="", readiness=false. Elapsed: 25.285549ms Jan 23 23:49:05.012: INFO: Pod "pod-secrets-7d5ff96b-2814-43dd-9f12-47e6c3971749": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033162346s Jan 23 23:49:07.017: INFO: Pod "pod-secrets-7d5ff96b-2814-43dd-9f12-47e6c3971749": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037942626s Jan 23 23:49:09.024: INFO: Pod "pod-secrets-7d5ff96b-2814-43dd-9f12-47e6c3971749": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045198532s Jan 23 23:49:11.032: INFO: Pod "pod-secrets-7d5ff96b-2814-43dd-9f12-47e6c3971749": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053011043s Jan 23 23:49:13.037: INFO: Pod "pod-secrets-7d5ff96b-2814-43dd-9f12-47e6c3971749": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.05866157s STEP: Saw pod success Jan 23 23:49:13.037: INFO: Pod "pod-secrets-7d5ff96b-2814-43dd-9f12-47e6c3971749" satisfied condition "success or failure" Jan 23 23:49:13.041: INFO: Trying to get logs from node jerma-node pod pod-secrets-7d5ff96b-2814-43dd-9f12-47e6c3971749 container secret-volume-test: STEP: delete the pod Jan 23 23:49:13.285: INFO: Waiting for pod pod-secrets-7d5ff96b-2814-43dd-9f12-47e6c3971749 to disappear Jan 23 23:49:13.298: INFO: Pod pod-secrets-7d5ff96b-2814-43dd-9f12-47e6c3971749 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:49:13.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2184" for this suite. 
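------------------------------
The "volume with mappings" case above amounts to a Secret whose key is remapped to a different file path via items, mounted into a short-lived pod that prints the file back. A hedged client-go sketch (v0.18+ signatures); the secret name, key, remapped path, and the agnhost mounttest arguments are indicative placeholders, not the suite's exact values.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ns := "secrets-2184"

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test-map"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	if _, err := client.CoreV1().Secrets(ns).Create(
		context.TODO(), secret, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map",
						// The mapping under test: key data-1 surfaces as new-path-data-1.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Args:  []string{"mounttest", "--file_content=/etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "secret-volume", MountPath: "/etc/secret-volume",
				}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods(ns).Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------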
• [SLOW TEST:10.466 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":581,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:49:13.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8503.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8503.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8503.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8503.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8503.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8503.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8503.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8503.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8503.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8503.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8503.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 189.221.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.221.189_udp@PTR;check="$$(dig +tcp +noall +answer +search 189.221.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.221.189_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8503.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8503.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8503.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8503.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8503.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8503.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8503.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8503.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8503.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8503.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8503.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 189.221.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.221.189_udp@PTR;check="$$(dig +tcp +noall +answer +search 189.221.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.221.189_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 23 23:49:25.619: INFO: Unable to read wheezy_udp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:25.623: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:25.627: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:25.631: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:25.664: INFO: Unable to read jessie_udp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:25.669: INFO: Unable to read jessie_tcp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:25.673: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:25.676: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:25.695: INFO: Lookups using dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28 failed for: [wheezy_udp@dns-test-service.dns-8503.svc.cluster.local wheezy_tcp@dns-test-service.dns-8503.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local jessie_udp@dns-test-service.dns-8503.svc.cluster.local jessie_tcp@dns-test-service.dns-8503.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local] Jan 23 23:49:30.703: INFO: Unable to read wheezy_udp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:30.709: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods 
dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:30.713: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:30.719: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:30.750: INFO: Unable to read jessie_udp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:30.754: INFO: Unable to read jessie_tcp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:30.758: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:30.762: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:30.799: INFO: Lookups using dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28 failed for: [wheezy_udp@dns-test-service.dns-8503.svc.cluster.local wheezy_tcp@dns-test-service.dns-8503.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local jessie_udp@dns-test-service.dns-8503.svc.cluster.local jessie_tcp@dns-test-service.dns-8503.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local] Jan 23 23:49:35.705: INFO: Unable to read wheezy_udp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:35.710: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:35.716: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:35.722: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:35.767: INFO: Unable to read jessie_udp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the 
server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:35.772: INFO: Unable to read jessie_tcp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:35.779: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:35.784: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:35.840: INFO: Lookups using dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28 failed for: [wheezy_udp@dns-test-service.dns-8503.svc.cluster.local wheezy_tcp@dns-test-service.dns-8503.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local jessie_udp@dns-test-service.dns-8503.svc.cluster.local jessie_tcp@dns-test-service.dns-8503.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local] Jan 23 23:49:40.706: INFO: Unable to read wheezy_udp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:40.713: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:40.718: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:40.723: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:40.755: INFO: Unable to read jessie_udp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:40.764: INFO: Unable to read jessie_tcp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:40.781: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:40.787: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod 
dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:40.828: INFO: Lookups using dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28 failed for: [wheezy_udp@dns-test-service.dns-8503.svc.cluster.local wheezy_tcp@dns-test-service.dns-8503.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local jessie_udp@dns-test-service.dns-8503.svc.cluster.local jessie_tcp@dns-test-service.dns-8503.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local] Jan 23 23:49:45.708: INFO: Unable to read wheezy_udp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:45.714: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:45.729: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:45.739: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:45.786: INFO: Unable to read jessie_udp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:45.793: INFO: Unable to read jessie_tcp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:45.807: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:45.812: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:45.836: INFO: Lookups using dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28 failed for: [wheezy_udp@dns-test-service.dns-8503.svc.cluster.local wheezy_tcp@dns-test-service.dns-8503.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local jessie_udp@dns-test-service.dns-8503.svc.cluster.local jessie_tcp@dns-test-service.dns-8503.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local] Jan 23 
23:49:50.709: INFO: Unable to read wheezy_udp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:50.715: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:50.721: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:50.726: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:50.763: INFO: Unable to read jessie_udp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:50.768: INFO: Unable to read jessie_tcp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:50.794: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:50.798: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28: the server could not find the requested resource (get pods dns-test-03f61e43-6880-4597-9dea-1396c79e4b28) Jan 23 23:49:50.822: INFO: Lookups using dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28 failed for: [wheezy_udp@dns-test-service.dns-8503.svc.cluster.local wheezy_tcp@dns-test-service.dns-8503.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local jessie_udp@dns-test-service.dns-8503.svc.cluster.local jessie_tcp@dns-test-service.dns-8503.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local] Jan 23 23:49:55.848: INFO: DNS probes using dns-8503/dns-test-03f61e43-6880-4597-9dea-1396c79e4b28 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:49:56.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8503" for this suite. 
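------------------------------
Stripped of the dig flags, each wheezy/jessie probe asserts three things for the names in the log: the service name resolves to an A record, _http._tcp on it exposes SRV records, and the service ClusterIP has a PTR record. The same checks in plain Go, using the names from the log; this has to run inside a cluster pod so that cluster.local resolves, and unlike the paired dig invocations it does not distinguish UDP from TCP transport.

package main

import (
	"fmt"
	"net"
)

func main() {
	host := "dns-test-service.dns-8503.svc.cluster.local"

	// Equivalent of `dig <service> A`.
	if addrs, err := net.LookupHost(host); err == nil {
		fmt.Println("A records:", addrs)
	} else {
		fmt.Println("A lookup failed:", err)
	}

	// Equivalent of `dig _http._tcp.<service> SRV`.
	if _, srvs, err := net.LookupSRV("http", "tcp", host); err == nil {
		for _, s := range srvs {
			fmt.Printf("SRV: %s:%d\n", s.Target, s.Port)
		}
	} else {
		fmt.Println("SRV lookup failed:", err)
	}

	// Equivalent of the PTR probe against the service ClusterIP.
	if names, err := net.LookupAddr("10.96.221.189"); err == nil {
		fmt.Println("PTR:", names)
	} else {
		fmt.Println("PTR lookup failed:", err)
	}
}
------------------------------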
• [SLOW TEST:42.967 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":38,"skipped":598,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:49:56.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-map-780dac69-902f-4d03-adee-b08ccbfde78d STEP: Creating a pod to test consume secrets Jan 23 23:49:56.491: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7fb2f9b7-3a75-4169-be0c-0d2f19c5ff68" in namespace "projected-7546" to be "success or failure" Jan 23 23:49:56.530: INFO: Pod "pod-projected-secrets-7fb2f9b7-3a75-4169-be0c-0d2f19c5ff68": Phase="Pending", Reason="", readiness=false. Elapsed: 38.723625ms Jan 23 23:49:58.539: INFO: Pod "pod-projected-secrets-7fb2f9b7-3a75-4169-be0c-0d2f19c5ff68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048238282s Jan 23 23:50:00.545: INFO: Pod "pod-projected-secrets-7fb2f9b7-3a75-4169-be0c-0d2f19c5ff68": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054121676s Jan 23 23:50:02.551: INFO: Pod "pod-projected-secrets-7fb2f9b7-3a75-4169-be0c-0d2f19c5ff68": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060522948s Jan 23 23:50:04.563: INFO: Pod "pod-projected-secrets-7fb2f9b7-3a75-4169-be0c-0d2f19c5ff68": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072309419s Jan 23 23:50:06.573: INFO: Pod "pod-projected-secrets-7fb2f9b7-3a75-4169-be0c-0d2f19c5ff68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082423693s STEP: Saw pod success Jan 23 23:50:06.573: INFO: Pod "pod-projected-secrets-7fb2f9b7-3a75-4169-be0c-0d2f19c5ff68" satisfied condition "success or failure" Jan 23 23:50:06.578: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-7fb2f9b7-3a75-4169-be0c-0d2f19c5ff68 container projected-secret-volume-test: STEP: delete the pod Jan 23 23:50:06.669: INFO: Waiting for pod pod-projected-secrets-7fb2f9b7-3a75-4169-be0c-0d2f19c5ff68 to disappear Jan 23 23:50:06.686: INFO: Pod pod-projected-secrets-7fb2f9b7-3a75-4169-be0c-0d2f19c5ff68 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:50:06.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7546" for this suite. 
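------------------------------
"Mappings and Item Mode set" translates to a projected volume whose secret source both remaps a key and pins the file mode of the resulting path. A sketch of just the volume definition; the secret name, key, path, and the 0400 mode are placeholders rather than the suite's exact values.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

func projectedSecretVolume() corev1.Volume {
	mode := int32(0400) // the per-item mode under test
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-secret-test-map",
						},
						// Remap the key and fix its mode in one item.
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1",
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
}
------------------------------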
• [SLOW TEST:10.462 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":598,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:50:06.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:50:14.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8900" for this suite. 
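------------------------------
The assertion behind "should have an terminated reason" reduces to reading State.Terminated from the pod's container statuses once the always-failing busybox command has exited. A hedged sketch (client-go v0.18+ signatures); the pod name here is invented, and only the namespace comes from the log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod, err := client.CoreV1().Pods("kubelet-test-8900").Get(
		context.TODO(), "bin-false-pod", metav1.GetOptions{}) // hypothetical pod name
	if err != nil {
		panic(err)
	}
	for _, cs := range pod.Status.ContainerStatuses {
		if t := cs.State.Terminated; t != nil {
			// For a command that always fails, Reason is typically "Error".
			fmt.Printf("container %s terminated: reason=%s exitCode=%d\n",
				cs.Name, t.Reason, t.ExitCode)
		}
	}
}
------------------------------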
• [SLOW TEST:8.208 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":615,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:50:14.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 23 23:50:15.702: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 23 23:50:17.717: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420215, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420215, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420215, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420215, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 23:50:19.724: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420215, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420215, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420215, loc:(*time.Location)(0x7d7cf00)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420215, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 23 23:50:21.723: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420215, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420215, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420215, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715420215, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 23 23:50:24.739: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:50:24.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1855" for this suite. STEP: Destroying namespace "webhook-1855-markers" for this suite. 
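------------------------------
"Registering the mutating configmap webhook via the AdmissionRegistration API" comes down to creating a MutatingWebhookConfiguration that points at the e2e-test-webhook service deployed above and matches configmap CREATE requests. An approximate client-go sketch; the configuration name, webhook name, path, and caBundle are placeholders, not the suite's actual values.

package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	path := "/mutating-configmaps" // hypothetical path served by the webhook pod
	sideEffects := admissionregistrationv1.SideEffectClassNone
	failurePolicy := admissionregistrationv1.Fail

	webhook := &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "mutate-configmap-webhook"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "mutate-configmap.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-1855",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: []byte("<ca-bundle>"), // placeholder for the real CA cert
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			SideEffects:             &sideEffects,
			FailurePolicy:           &failurePolicy,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	if _, err := client.AdmissionregistrationV1().MutatingWebhookConfigurations().Create(
		context.TODO(), webhook, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------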
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:10.030 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":41,"skipped":620,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:50:24.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-73e15d28-498f-4ae7-a4c1-8e83ea5af53f STEP: Creating a pod to test consume secrets Jan 23 23:50:25.232: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0ddf89b0-212f-48c8-bf64-1dc8cae79e10" in namespace "projected-5348" to be "success or failure" Jan 23 23:50:25.239: INFO: Pod "pod-projected-secrets-0ddf89b0-212f-48c8-bf64-1dc8cae79e10": Phase="Pending", Reason="", readiness=false. Elapsed: 6.596858ms Jan 23 23:50:27.247: INFO: Pod "pod-projected-secrets-0ddf89b0-212f-48c8-bf64-1dc8cae79e10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014684389s Jan 23 23:50:29.260: INFO: Pod "pod-projected-secrets-0ddf89b0-212f-48c8-bf64-1dc8cae79e10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028029267s Jan 23 23:50:31.266: INFO: Pod "pod-projected-secrets-0ddf89b0-212f-48c8-bf64-1dc8cae79e10": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033838771s Jan 23 23:50:33.272: INFO: Pod "pod-projected-secrets-0ddf89b0-212f-48c8-bf64-1dc8cae79e10": Phase="Pending", Reason="", readiness=false. Elapsed: 8.039963335s Jan 23 23:50:35.285: INFO: Pod "pod-projected-secrets-0ddf89b0-212f-48c8-bf64-1dc8cae79e10": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.053354102s STEP: Saw pod success Jan 23 23:50:35.286: INFO: Pod "pod-projected-secrets-0ddf89b0-212f-48c8-bf64-1dc8cae79e10" satisfied condition "success or failure" Jan 23 23:50:35.290: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-0ddf89b0-212f-48c8-bf64-1dc8cae79e10 container projected-secret-volume-test: STEP: delete the pod Jan 23 23:50:35.332: INFO: Waiting for pod pod-projected-secrets-0ddf89b0-212f-48c8-bf64-1dc8cae79e10 to disappear Jan 23 23:50:35.339: INFO: Pod pod-projected-secrets-0ddf89b0-212f-48c8-bf64-1dc8cae79e10 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:50:35.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5348" for this suite. • [SLOW TEST:10.517 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":621,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:50:35.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Starting the proxy Jan 23 23:50:35.692: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix464522107/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:50:35.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3681" for this suite. 
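The --unix-socket proxy above can be exercised without kubectl by pointing an HTTP client at the socket. A small Go sketch of the "retrieving proxy /api/ output" check: the socket path is the one from this run, and the host in the URL is arbitrary because the custom dialer ignores it and connects to the socket.

    // Sketch: GET /api/ through a kubectl proxy listening on a unix socket.
    package main

    import (
        "context"
        "fmt"
        "io"
        "net"
        "net/http"
    )

    func main() {
        sock := "/tmp/kubectl-proxy-unix464522107/test"
        client := &http.Client{Transport: &http.Transport{
            // Dial the unix socket regardless of the requested host:port.
            DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                return (&net.Dialer{}).DialContext(ctx, "unix", sock)
            },
        }}
        resp, err := client.Get("http://localhost/api/")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body)) // expect an APIVersions document from the apiserver
    }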
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":43,"skipped":628,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:50:35.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 23 23:50:35.995: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 23 23:50:39.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2395 create -f -' Jan 23 23:50:42.088: INFO: stderr: "" Jan 23 23:50:42.088: INFO: stdout: "e2e-test-crd-publish-openapi-5546-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jan 23 23:50:42.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2395 delete e2e-test-crd-publish-openapi-5546-crds test-cr' Jan 23 23:50:42.274: INFO: stderr: "" Jan 23 23:50:42.274: INFO: stdout: "e2e-test-crd-publish-openapi-5546-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jan 23 23:50:42.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2395 apply -f -' Jan 23 23:50:42.564: INFO: stderr: "" Jan 23 23:50:42.564: INFO: stdout: "e2e-test-crd-publish-openapi-5546-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jan 23 23:50:42.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2395 delete e2e-test-crd-publish-openapi-5546-crds test-cr' Jan 23 23:50:42.694: INFO: stderr: "" Jan 23 23:50:42.694: INFO: stdout: "e2e-test-crd-publish-openapi-5546-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jan 23 23:50:42.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5546-crds' Jan 23 23:50:42.949: INFO: stderr: "" Jan 23 23:50:42.949: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5546-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. 
Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:50:44.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2395" for this suite. • [SLOW TEST:9.067 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":44,"skipped":630,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:50:44.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jan 23 23:50:45.000: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8406 /api/v1/namespaces/watch-8406/configmaps/e2e-watch-test-watch-closed c4465af6-95a9-4040-b904-b0e2f7d38407 3905751 0 2020-01-23 23:50:44 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 23 23:50:45.001: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8406 /api/v1/namespaces/watch-8406/configmaps/e2e-watch-test-watch-closed c4465af6-95a9-4040-b904-b0e2f7d38407 3905752 0 2020-01-23 23:50:44 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jan 23 23:50:45.018: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8406 
/api/v1/namespaces/watch-8406/configmaps/e2e-watch-test-watch-closed c4465af6-95a9-4040-b904-b0e2f7d38407 3905753 0 2020-01-23 23:50:44 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 23 23:50:45.018: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8406 /api/v1/namespaces/watch-8406/configmaps/e2e-watch-test-watch-closed c4465af6-95a9-4040-b904-b0e2f7d38407 3905754 0 2020-01-23 23:50:44 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:50:45.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8406" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":45,"skipped":655,"failed":0} SS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:50:45.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name cm-test-opt-del-3b1103e1-0829-40fc-8ebf-7f26fdff5612 STEP: Creating configMap with name cm-test-opt-upd-3b4de3ca-e012-4ba9-8d40-23d01a72ec9e STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-3b1103e1-0829-40fc-8ebf-7f26fdff5612 STEP: Updating configmap cm-test-opt-upd-3b4de3ca-e012-4ba9-8d40-23d01a72ec9e STEP: Creating configMap with name cm-test-opt-create-f4725627-31fd-4ba9-bceb-cbbfc88ca6d0 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:52:16.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-507" for this suite. 
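The three ConfigMaps above (del/upd/create) are all mounted with optional set, which is what lets the pod keep running while one ConfigMap is deleted and another does not exist yet. A sketch of that volume shape, with placeholder names:

    // Sketch: a ConfigMap volume marked optional, so a missing ConfigMap
    // yields an empty volume instead of a pod startup error.
    package sketch

    import corev1 "k8s.io/api/core/v1"

    func optionalConfigMapVolume(volName, cmName string) corev1.Volume {
        optional := true
        return corev1.Volume{
            Name: volName,
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                    Optional:             &optional, // tolerate absent ConfigMaps
                },
            },
        }
    }

The kubelet propagates ConfigMap changes into such volumes on its periodic sync rather than instantly, which is why the "waiting to observe update in volume" phase above takes on the order of a minute.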
• [SLOW TEST:91.919 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":657,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:52:16.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: starting the proxy server Jan 23 23:52:17.042: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:52:17.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8805" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":47,"skipped":698,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:52:17.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87 Jan 23 23:52:17.383: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 23 23:52:17.413: INFO: Waiting for terminating namespaces to be deleted... 
Jan 23 23:52:17.417: INFO: Logging pods the kubelet thinks is on node jerma-node before test Jan 23 23:52:17.425: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Jan 23 23:52:17.425: INFO: Container kube-proxy ready: true, restart count 0 Jan 23 23:52:17.425: INFO: pod-configmaps-cf141759-d305-451a-aee2-978cc01e9210 from configmap-507 started at 2020-01-23 23:50:45 +0000 UTC (3 container statuses recorded) Jan 23 23:52:17.425: INFO: Container createcm-volume-test ready: true, restart count 0 Jan 23 23:52:17.425: INFO: Container delcm-volume-test ready: true, restart count 0 Jan 23 23:52:17.425: INFO: Container updcm-volume-test ready: true, restart count 0 Jan 23 23:52:17.425: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 23 23:52:17.425: INFO: Container weave ready: true, restart count 1 Jan 23 23:52:17.425: INFO: Container weave-npc ready: true, restart count 0 Jan 23 23:52:17.425: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Jan 23 23:52:17.445: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 23 23:52:17.445: INFO: Container coredns ready: true, restart count 0 Jan 23 23:52:17.446: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 23 23:52:17.446: INFO: Container coredns ready: true, restart count 0 Jan 23 23:52:17.446: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 23 23:52:17.446: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 23 23:52:17.446: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Jan 23 23:52:17.446: INFO: Container kube-proxy ready: true, restart count 0 Jan 23 23:52:17.446: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 23 23:52:17.446: INFO: Container weave ready: true, restart count 0 Jan 23 23:52:17.446: INFO: Container weave-npc ready: true, restart count 0 Jan 23 23:52:17.446: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 23 23:52:17.446: INFO: Container kube-scheduler ready: true, restart count 3 Jan 23 23:52:17.446: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 23 23:52:17.446: INFO: Container kube-apiserver ready: true, restart count 1 Jan 23 23:52:17.446: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 23 23:52:17.446: INFO: Container etcd ready: true, restart count 1 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: verifying the node has the label node jerma-node STEP: verifying the node has the label node jerma-server-mvvl6gufaqub Jan 23 23:52:17.554: INFO: Pod pod-configmaps-cf141759-d305-451a-aee2-978cc01e9210 requesting resource cpu=0m on Node jerma-node Jan 23 23:52:17.554: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Jan 23 
Jan 23 23:52:17.417: INFO: Logging pods the kubelet thinks are on node jerma-node before test Jan 23 23:52:17.425: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded) Jan 23 23:52:17.425: INFO: Container kube-proxy ready: true, restart count 0 Jan 23 23:52:17.425: INFO: pod-configmaps-cf141759-d305-451a-aee2-978cc01e9210 from configmap-507 started at 2020-01-23 23:50:45 +0000 UTC (3 container statuses recorded) Jan 23 23:52:17.425: INFO: Container createcm-volume-test ready: true, restart count 0 Jan 23 23:52:17.425: INFO: Container delcm-volume-test ready: true, restart count 0 Jan 23 23:52:17.425: INFO: Container updcm-volume-test ready: true, restart count 0 Jan 23 23:52:17.425: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 23 23:52:17.425: INFO: Container weave ready: true, restart count 1 Jan 23 23:52:17.425: INFO: Container weave-npc ready: true, restart count 0 Jan 23 23:52:17.425: INFO: Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test Jan 23 23:52:17.445: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded) Jan 23 23:52:17.445: INFO: Container coredns ready: true, restart count 0 Jan 23 23:52:17.446: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded) Jan 23 23:52:17.446: INFO: Container coredns ready: true, restart count 0 Jan 23 23:52:17.446: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded) Jan 23 23:52:17.446: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 23 23:52:17.446: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded) Jan 23 23:52:17.446: INFO: Container kube-proxy ready: true, restart count 0 Jan 23 23:52:17.446: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 23 23:52:17.446: INFO: Container weave ready: true, restart count 0 Jan 23 23:52:17.446: INFO: Container weave-npc ready: true, restart count 0 Jan 23 23:52:17.446: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded) Jan 23 23:52:17.446: INFO: Container kube-scheduler ready: true, restart count 3 Jan 23 23:52:17.446: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded) Jan 23 23:52:17.446: INFO: Container kube-apiserver ready: true, restart count 1 Jan 23 23:52:17.446: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded) Jan 23 23:52:17.446: INFO: Container etcd ready: true, restart count 1 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: verifying the node has the label node jerma-node STEP: verifying the node has the label node jerma-server-mvvl6gufaqub Jan 23 23:52:17.554: INFO: Pod pod-configmaps-cf141759-d305-451a-aee2-978cc01e9210 requesting resource cpu=0m on Node jerma-node Jan 23 23:52:17.554: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Jan 23
23:52:17.555: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Jan 23 23:52:17.555: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub Jan 23 23:52:17.555: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub Jan 23 23:52:17.555: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub Jan 23 23:52:17.555: INFO: Pod kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub Jan 23 23:52:17.555: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node Jan 23 23:52:17.555: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Jan 23 23:52:17.555: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node Jan 23 23:52:17.555: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub STEP: Starting Pods to consume most of the cluster CPU. Jan 23 23:52:17.555: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node Jan 23 23:52:17.562: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-08871da0-5309-44ee-b1f6-8bb69121d55a.15eca9232419dc54], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4573/filler-pod-08871da0-5309-44ee-b1f6-8bb69121d55a to jerma-server-mvvl6gufaqub] STEP: Considering event: Type = [Normal], Name = [filler-pod-08871da0-5309-44ee-b1f6-8bb69121d55a.15eca9242f964265], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-08871da0-5309-44ee-b1f6-8bb69121d55a.15eca924f5f57144], Reason = [Created], Message = [Created container filler-pod-08871da0-5309-44ee-b1f6-8bb69121d55a] STEP: Considering event: Type = [Normal], Name = [filler-pod-08871da0-5309-44ee-b1f6-8bb69121d55a.15eca92510cfa8bc], Reason = [Started], Message = [Started container filler-pod-08871da0-5309-44ee-b1f6-8bb69121d55a] STEP: Considering event: Type = [Normal], Name = [filler-pod-d360f5d3-8bf1-4588-9ffa-aadfcd80affd.15eca9231ca7405f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4573/filler-pod-d360f5d3-8bf1-4588-9ffa-aadfcd80affd to jerma-node] STEP: Considering event: Type = [Normal], Name = [filler-pod-d360f5d3-8bf1-4588-9ffa-aadfcd80affd.15eca92420982a96], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-d360f5d3-8bf1-4588-9ffa-aadfcd80affd.15eca924cf54ae38], Reason = [Created], Message = [Created container filler-pod-d360f5d3-8bf1-4588-9ffa-aadfcd80affd] STEP: Considering event: Type = [Normal], Name = [filler-pod-d360f5d3-8bf1-4588-9ffa-aadfcd80affd.15eca924e874af97], Reason = [Started], Message = [Started container filler-pod-d360f5d3-8bf1-4588-9ffa-aadfcd80affd] STEP: Considering event: Type = [Warning], Name = [additional-pod.15eca92578b36379], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] 
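The filler pods above are sized from each node's allocatable CPU minus what the running pods already request (hence the odd values 2786m and 2261m), so the follow-up pod cannot fit on either node. A sketch of such a pod spec; the request value and image are taken from this run, the helper name is illustrative:

    // Sketch: a "filler" pod spec that pins down a given amount of CPU.
    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    func fillerPodSpec(cpu string) corev1.PodSpec {
        return corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "filler",
                Image: "k8s.gcr.io/pause:3.1",
                Resources: corev1.ResourceRequirements{
                    // e.g. cpu = "2786m"; requests are what the scheduler accounts for
                    Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse(cpu)},
                    Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse(cpu)},
                },
            }},
        }
    }

With both nodes saturated, the additional pod's only possible outcome is the FailedScheduling event recorded above.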
STEP: removing the label node off the node jerma-node STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-server-mvvl6gufaqub STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:52:28.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4573" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78 • [SLOW TEST:11.616 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":48,"skipped":703,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:52:28.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383 STEP: creating the pod Jan 23 23:52:29.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1449' Jan 23 23:52:29.470: INFO: stderr: "" Jan 23 23:52:29.470: INFO: stdout: "pod/pause created\n" Jan 23 23:52:29.470: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jan 23 23:52:29.470: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1449" to be "running and ready" Jan 23 23:52:29.474: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019642ms Jan 23 23:52:31.483: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013010183s Jan 23 23:52:33.527: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056494277s Jan 23 23:52:35.887: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.416606945s Jan 23 23:52:38.077: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.607354002s Jan 23 23:52:40.082: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.612245277s Jan 23 23:52:42.089: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 12.618895803s Jan 23 23:52:42.089: INFO: Pod "pause" satisfied condition "running and ready" Jan 23 23:52:42.089: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: adding the label testing-label with value testing-label-value to a pod Jan 23 23:52:42.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1449' Jan 23 23:52:42.374: INFO: stderr: "" Jan 23 23:52:42.374: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jan 23 23:52:42.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1449' Jan 23 23:52:42.604: INFO: stderr: "" Jan 23 23:52:42.604: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 13s testing-label-value\n" STEP: removing the label testing-label of a pod Jan 23 23:52:42.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1449' Jan 23 23:52:42.777: INFO: stderr: "" Jan 23 23:52:42.777: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jan 23 23:52:42.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1449' Jan 23 23:52:42.927: INFO: stderr: "" Jan 23 23:52:42.927: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 13s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1390 STEP: using delete to clean up resources Jan 23 23:52:42.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1449' Jan 23 23:52:43.071: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 23 23:52:43.071: INFO: stdout: "pod \"pause\" force deleted\n" Jan 23 23:52:43.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1449' Jan 23 23:52:43.218: INFO: stderr: "No resources found in kubectl-1449 namespace.\n" Jan 23 23:52:43.218: INFO: stdout: "" Jan 23 23:52:43.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1449 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 23 23:52:43.318: INFO: stderr: "" Jan 23 23:52:43.318: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:52:43.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1449" for this suite. 
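The label add/remove cycle above goes through the kubectl CLI; the equivalent API calls are two strategic merge patches, where setting the key to null deletes it. A sketch against the namespace and pod name from this run, assuming a client-go recent enough for Patch to take a context:

    // Sketch: add then remove a pod label via strategic merge patches.
    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    func setAndClearLabel(cs kubernetes.Interface) error {
        pods := cs.CoreV1().Pods("kubectl-1449")
        // kubectl label pods pause testing-label=testing-label-value
        add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
        if _, err := pods.Patch(context.TODO(), "pause", types.StrategicMergePatchType, add, metav1.PatchOptions{}); err != nil {
            return err
        }
        // kubectl label pods pause testing-label-   (null deletes the key)
        del := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
        _, err := pods.Patch(context.TODO(), "pause", types.StrategicMergePatchType, del, metav1.PatchOptions{})
        return err
    }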
• [SLOW TEST:14.465 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1380 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":49,"skipped":708,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:52:43.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 23 23:52:44.263: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1fd884e3-02c9-4b04-9a20-cb7f0e3964ea" in namespace "downward-api-6187" to be "success or failure" Jan 23 23:52:44.300: INFO: Pod "downwardapi-volume-1fd884e3-02c9-4b04-9a20-cb7f0e3964ea": Phase="Pending", Reason="", readiness=false. Elapsed: 36.961729ms Jan 23 23:52:46.307: INFO: Pod "downwardapi-volume-1fd884e3-02c9-4b04-9a20-cb7f0e3964ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043378156s Jan 23 23:52:48.322: INFO: Pod "downwardapi-volume-1fd884e3-02c9-4b04-9a20-cb7f0e3964ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058582141s Jan 23 23:52:50.328: INFO: Pod "downwardapi-volume-1fd884e3-02c9-4b04-9a20-cb7f0e3964ea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064315354s Jan 23 23:52:52.333: INFO: Pod "downwardapi-volume-1fd884e3-02c9-4b04-9a20-cb7f0e3964ea": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06926286s Jan 23 23:52:54.338: INFO: Pod "downwardapi-volume-1fd884e3-02c9-4b04-9a20-cb7f0e3964ea": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.074822201s STEP: Saw pod success Jan 23 23:52:54.338: INFO: Pod "downwardapi-volume-1fd884e3-02c9-4b04-9a20-cb7f0e3964ea" satisfied condition "success or failure" Jan 23 23:52:54.341: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-1fd884e3-02c9-4b04-9a20-cb7f0e3964ea container client-container: STEP: delete the pod Jan 23 23:52:54.375: INFO: Waiting for pod downwardapi-volume-1fd884e3-02c9-4b04-9a20-cb7f0e3964ea to disappear Jan 23 23:52:54.409: INFO: Pod downwardapi-volume-1fd884e3-02c9-4b04-9a20-cb7f0e3964ea no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:52:54.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6187" for this suite. • [SLOW TEST:11.094 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":794,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:52:54.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:53:02.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8489" for this suite. 
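The "print the output to logs" assertion boils down to reading the pod's log subresource, i.e. what kubectl logs does. A minimal client-go sketch, assuming a client-go recent enough that request execution takes a context:

    // Sketch: fetch a pod's container log output via the log subresource.
    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
    )

    func podLogs(cs kubernetes.Interface, ns, pod string) (string, error) {
        raw, err := cs.CoreV1().Pods(ns).
            GetLogs(pod, &corev1.PodLogOptions{}).
            DoRaw(context.TODO())
        return string(raw), err
    }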
• [SLOW TEST:8.416 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":803,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:53:02.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-map-74ade706-a0c4-412e-8cc2-53495aa6e79f STEP: Creating a pod to test consume configMaps Jan 23 23:53:02.989: INFO: Waiting up to 5m0s for pod "pod-configmaps-e329d392-3422-48f4-b5f0-cf26714caa7d" in namespace "configmap-884" to be "success or failure" Jan 23 23:53:03.016: INFO: Pod "pod-configmaps-e329d392-3422-48f4-b5f0-cf26714caa7d": Phase="Pending", Reason="", readiness=false. Elapsed: 27.008039ms Jan 23 23:53:05.020: INFO: Pod "pod-configmaps-e329d392-3422-48f4-b5f0-cf26714caa7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03147909s Jan 23 23:53:07.025: INFO: Pod "pod-configmaps-e329d392-3422-48f4-b5f0-cf26714caa7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035707446s Jan 23 23:53:09.029: INFO: Pod "pod-configmaps-e329d392-3422-48f4-b5f0-cf26714caa7d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040097719s Jan 23 23:53:11.034: INFO: Pod "pod-configmaps-e329d392-3422-48f4-b5f0-cf26714caa7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045104882s STEP: Saw pod success Jan 23 23:53:11.034: INFO: Pod "pod-configmaps-e329d392-3422-48f4-b5f0-cf26714caa7d" satisfied condition "success or failure" Jan 23 23:53:11.037: INFO: Trying to get logs from node jerma-node pod pod-configmaps-e329d392-3422-48f4-b5f0-cf26714caa7d container configmap-volume-test: STEP: delete the pod Jan 23 23:53:11.241: INFO: Waiting for pod pod-configmaps-e329d392-3422-48f4-b5f0-cf26714caa7d to disappear Jan 23 23:53:11.247: INFO: Pod pod-configmaps-e329d392-3422-48f4-b5f0-cf26714caa7d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:53:11.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-884" for this suite. 
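The "mappings as non-root" variant above combines two things: a ConfigMap volume whose keys are remapped to custom file paths via items, and a pod-level runAsUser so the projected files must be readable by a non-root UID. A sketch with placeholder names, paths, image, and UID (the test generates its own):

    // Sketch: ConfigMap volume with an explicit key->path mapping, mounted
    // into a pod that runs as a non-root user.
    package sketch

    import corev1 "k8s.io/api/core/v1"

    func nonRootConfigMapPodSpec() corev1.PodSpec {
        uid := int64(1000) // assumed non-root UID
        return corev1.PodSpec{
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
            Volumes: []corev1.Volume{{
                Name: "configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
                        // remap key "data-1" to a nested path inside the mount
                        Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "configmap-volume-test",
                Image:        "busybox", // placeholder; the test uses its own mounttest image
                Command:      []string{"cat", "/etc/configmap-volume/path/to/data-2"},
                VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
            }},
        }
    }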
• [SLOW TEST:8.418 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":832,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:53:11.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 23 23:53:11.520: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Jan 23 23:53:14.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2510 create -f -' Jan 23 23:53:17.728: INFO: stderr: "" Jan 23 23:53:17.728: INFO: stdout: "e2e-test-crd-publish-openapi-6541-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 23 23:53:17.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2510 delete e2e-test-crd-publish-openapi-6541-crds test-foo' Jan 23 23:53:17.980: INFO: stderr: "" Jan 23 23:53:17.980: INFO: stdout: "e2e-test-crd-publish-openapi-6541-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jan 23 23:53:17.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2510 apply -f -' Jan 23 23:53:18.284: INFO: stderr: "" Jan 23 23:53:18.285: INFO: stdout: "e2e-test-crd-publish-openapi-6541-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 23 23:53:18.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2510 delete e2e-test-crd-publish-openapi-6541-crds test-foo' Jan 23 23:53:18.413: INFO: stderr: "" Jan 23 23:53:18.413: INFO: stdout: "e2e-test-crd-publish-openapi-6541-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jan 23 23:53:18.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2510 create -f -' Jan 23 23:53:18.781: INFO: rc: 1 Jan 23 23:53:18.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2510 apply -f -' Jan 23 23:53:19.054: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Jan 23 23:53:19.054: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2510 create -f -' Jan 23 23:53:19.321: INFO: rc: 1 Jan 23 23:53:19.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2510 apply -f -' Jan 23 23:53:19.644: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Jan 23 23:53:19.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6541-crds' Jan 23 23:53:19.958: INFO: stderr: "" Jan 23 23:53:19.958: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6541-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jan 23 23:53:19.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6541-crds.metadata' Jan 23 23:53:20.231: INFO: stderr: "" Jan 23 23:53:20.231: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6541-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. 
Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. 
Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jan 23 23:53:20.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6541-crds.spec' Jan 23 23:53:20.559: INFO: stderr: "" Jan 23 23:53:20.559: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6541-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jan 23 23:53:20.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6541-crds.spec.bars' Jan 23 23:53:20.837: INFO: stderr: "" Jan 23 23:53:20.837: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6541-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jan 23 23:53:20.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6541-crds.spec.bars2' Jan 23 23:53:21.121: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:53:24.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2510" for this suite. • [SLOW TEST:12.906 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":53,"skipped":865,"failed":0} SSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:53:24.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: getting the auto-created API token STEP: reading a file in the container Jan 23 23:53:32.810: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5216 pod-service-account-fd4b5041-a025-45d0-8689-8717f9c2e3f3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jan 23 23:53:33.128: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5216 pod-service-account-fd4b5041-a025-45d0-8689-8717f9c2e3f3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the 
container Jan 23 23:53:33.428: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5216 pod-service-account-fd4b5041-a025-45d0-8689-8717f9c2e3f3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:53:33.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5216" for this suite. • [SLOW TEST:9.615 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":54,"skipped":870,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:53:33.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: set up a multi-version CRD Jan 23 23:53:33.967: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:53:50.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4579" for this suite.
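For readers reproducing the steps above by hand: the shape being exercised is a CRD that serves two versions, one of which is then flipped to served: false so its definition drops out of the published OpenAPI document. A minimal sketch, assuming a reachable cluster; the group, kind, and names below are hypothetical stand-ins, not the generated e2e-test-crd-publish-openapi fixture:

kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.multiversion.example.com   # hypothetical name/group
spec:
  group: multiversion.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true          # published in the aggregated OpenAPI spec
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: true          # the step above patches this to false
    storage: false
    schema:
      openAPIV3Schema:
        type: object
EOF

After patching spec.versions[1].served to false, the v2 definitions should disappear from the published OpenAPI document while v1 stays untouched, which is what the two "check ..." steps above verify.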
• [SLOW TEST:16.712 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":55,"skipped":877,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:53:50.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test substitution in container's command Jan 23 23:53:50.641: INFO: Waiting up to 5m0s for pod "var-expansion-fa4db44f-2618-4457-b8c5-425141e3fa96" in namespace "var-expansion-9673" to be "success or failure" Jan 23 23:53:50.709: INFO: Pod "var-expansion-fa4db44f-2618-4457-b8c5-425141e3fa96": Phase="Pending", Reason="", readiness=false. Elapsed: 68.706897ms Jan 23 23:53:52.717: INFO: Pod "var-expansion-fa4db44f-2618-4457-b8c5-425141e3fa96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076193825s Jan 23 23:53:54.722: INFO: Pod "var-expansion-fa4db44f-2618-4457-b8c5-425141e3fa96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080909132s Jan 23 23:53:56.728: INFO: Pod "var-expansion-fa4db44f-2618-4457-b8c5-425141e3fa96": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08762122s Jan 23 23:53:58.739: INFO: Pod "var-expansion-fa4db44f-2618-4457-b8c5-425141e3fa96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.098026354s STEP: Saw pod success Jan 23 23:53:58.739: INFO: Pod "var-expansion-fa4db44f-2618-4457-b8c5-425141e3fa96" satisfied condition "success or failure" Jan 23 23:53:58.743: INFO: Trying to get logs from node jerma-node pod var-expansion-fa4db44f-2618-4457-b8c5-425141e3fa96 container dapi-container: STEP: delete the pod Jan 23 23:53:58.819: INFO: Waiting for pod var-expansion-fa4db44f-2618-4457-b8c5-425141e3fa96 to disappear Jan 23 23:53:58.904: INFO: Pod var-expansion-fa4db44f-2618-4457-b8c5-425141e3fa96 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:53:58.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9673" for this suite. 
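The substitution just verified is the $(VAR_NAME) expansion Kubernetes applies to a container's command and args using that container's environment. A minimal sketch of a comparable pod, assuming a reachable cluster; the pod and variable names are hypothetical, since the test generates its own:

kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo   # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: test-value
    # $(MESSAGE) is expanded by Kubernetes before the shell ever sees it,
    # so the container runs: /bin/sh -c "echo test-value"
    command: ["/bin/sh", "-c", "echo $(MESSAGE)"]
EOF

Once the pod reaches Succeeded, kubectl logs var-expansion-demo should print test-value, mirroring the "success or failure" log check performed above.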
• [SLOW TEST:8.418 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":902,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:53:58.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87 Jan 23 23:53:58.974: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 23 23:53:59.042: INFO: Waiting for terminating namespaces to be deleted... Jan 23 23:53:59.054: INFO: Logging pods the kubelet thinks are on node jerma-node before test Jan 23 23:53:59.060: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded) Jan 23 23:53:59.060: INFO: Container kube-proxy ready: true, restart count 0 Jan 23 23:53:59.060: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 23 23:53:59.060: INFO: Container weave ready: true, restart count 1 Jan 23 23:53:59.060: INFO: Container weave-npc ready: true, restart count 0 Jan 23 23:53:59.060: INFO: Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test Jan 23 23:53:59.077: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded) Jan 23 23:53:59.077: INFO: Container kube-scheduler ready: true, restart count 3 Jan 23 23:53:59.077: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded) Jan 23 23:53:59.077: INFO: Container kube-apiserver ready: true, restart count 1 Jan 23 23:53:59.077: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded) Jan 23 23:53:59.077: INFO: Container etcd ready: true, restart count 1 Jan 23 23:53:59.077: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded) Jan 23 23:53:59.077: INFO: Container coredns ready: true, restart count 0 Jan 23 23:53:59.077: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded) Jan 23 23:53:59.077: INFO: Container coredns ready: true, restart count 0 Jan 23 23:53:59.077: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status
recorded) Jan 23 23:53:59.077: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 23 23:53:59.077: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded) Jan 23 23:53:59.077: INFO: Container kube-proxy ready: true, restart count 0 Jan 23 23:53:59.077: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 23 23:53:59.077: INFO: Container weave ready: true, restart count 0 Jan 23 23:53:59.077: INFO: Container weave-npc ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-4f8f718c-a86c-4b98-a671-9ec8bf5698ed 90 STEP: Trying to create a pod (pod1) with hostport 54321 and hostIP 127.0.0.1 and expect it to be scheduled STEP: Trying to create another pod (pod2) with hostport 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expect it to be scheduled STEP: Trying to create a third pod (pod3) with hostport 54321, hostIP 127.0.0.2 but using UDP protocol on the node where pod2 resides STEP: removing the label kubernetes.io/e2e-4f8f718c-a86c-4b98-a671-9ec8bf5698ed off the node jerma-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-4f8f718c-a86c-4b98-a671-9ec8bf5698ed [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:54:33.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2814" for this suite.
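The scheduling rule exercised here: host ports only conflict when the whole (hostIP, hostPort, protocol) tuple collides, so three pods can share hostPort 54321 on one node. A two-pod sketch, assuming a reachable cluster; the pod names are hypothetical, and the test's random-label pinning is approximated with the built-in kubernetes.io/hostname label:

kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostport-pod1   # hypothetical
spec:
  nodeSelector:
    kubernetes.io/hostname: jerma-node
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: hostport-pod2   # hypothetical
spec:
  nodeSelector:
    kubernetes.io/hostname: jerma-node
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.2   # same port, different hostIP: no conflict
      protocol: TCP
EOF

pod3 above is the same idea once more: it reuses hostIP 127.0.0.2 but switches the protocol to UDP, so all three tuples stay distinct and all three pods schedule onto the same node.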
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78 • [SLOW TEST:34.538 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":57,"skipped":915,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:54:33.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 23 23:54:33.674: INFO: Waiting up to 5m0s for pod "pod-81388fca-d84d-4444-ba7d-ff61f0df7f51" in namespace "emptydir-7385" to be "success or failure" Jan 23 23:54:33.704: INFO: Pod "pod-81388fca-d84d-4444-ba7d-ff61f0df7f51": Phase="Pending", Reason="", readiness=false. Elapsed: 29.079718ms Jan 23 23:54:35.719: INFO: Pod "pod-81388fca-d84d-4444-ba7d-ff61f0df7f51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044446541s Jan 23 23:54:37.725: INFO: Pod "pod-81388fca-d84d-4444-ba7d-ff61f0df7f51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050469462s Jan 23 23:54:39.728: INFO: Pod "pod-81388fca-d84d-4444-ba7d-ff61f0df7f51": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053218077s Jan 23 23:54:41.734: INFO: Pod "pod-81388fca-d84d-4444-ba7d-ff61f0df7f51": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059319713s Jan 23 23:54:43.737: INFO: Pod "pod-81388fca-d84d-4444-ba7d-ff61f0df7f51": Phase="Pending", Reason="", readiness=false. Elapsed: 10.062867035s Jan 23 23:54:45.743: INFO: Pod "pod-81388fca-d84d-4444-ba7d-ff61f0df7f51": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.068986222s STEP: Saw pod success Jan 23 23:54:45.743: INFO: Pod "pod-81388fca-d84d-4444-ba7d-ff61f0df7f51" satisfied condition "success or failure" Jan 23 23:54:45.747: INFO: Trying to get logs from node jerma-node pod pod-81388fca-d84d-4444-ba7d-ff61f0df7f51 container test-container: STEP: delete the pod Jan 23 23:54:45.790: INFO: Waiting for pod pod-81388fca-d84d-4444-ba7d-ff61f0df7f51 to disappear Jan 23 23:54:45.804: INFO: Pod pod-81388fca-d84d-4444-ba7d-ff61f0df7f51 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 23 23:54:45.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7385" for this suite. • [SLOW TEST:12.363 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":938,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 23 23:54:45.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5746 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating stateful set ss in namespace statefulset-5746 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5746 Jan 23 23:54:46.117: INFO: Found 0 stateful pods, waiting for 1 Jan 23 23:54:56.124: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jan 23 23:54:56.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 23 23:54:56.688: INFO: stderr: "I0123 23:54:56.351895 1208 log.go:172] (0xc0003b42c0) (0xc0004f7720) Create stream\nI0123 23:54:56.352073 1208 log.go:172] (0xc0003b42c0) (0xc0004f7720) Stream added, broadcasting: 1\nI0123 23:54:56.356885 1208 log.go:172] (0xc0003b42c0) Reply frame received for 1\nI0123 23:54:56.356946 1208 log.go:172] (0xc0003b42c0) (0xc0008e0000) Create stream\nI0123 
23:54:56.356991 1208 log.go:172] (0xc0003b42c0) (0xc0008e0000) Stream added, broadcasting: 3\nI0123 23:54:56.358997 1208 log.go:172] (0xc0003b42c0) Reply frame received for 3\nI0123 23:54:56.359135 1208 log.go:172] (0xc0003b42c0) (0xc00086a000) Create stream\nI0123 23:54:56.359227 1208 log.go:172] (0xc0003b42c0) (0xc00086a000) Stream added, broadcasting: 5\nI0123 23:54:56.363838 1208 log.go:172] (0xc0003b42c0) Reply frame received for 5\nI0123 23:54:56.444364 1208 log.go:172] (0xc0003b42c0) Data frame received for 5\nI0123 23:54:56.444424 1208 log.go:172] (0xc00086a000) (5) Data frame handling\nI0123 23:54:56.444439 1208 log.go:172] (0xc00086a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0123 23:54:56.523386 1208 log.go:172] (0xc0003b42c0) Data frame received for 3\nI0123 23:54:56.523544 1208 log.go:172] (0xc0008e0000) (3) Data frame handling\nI0123 23:54:56.523589 1208 log.go:172] (0xc0008e0000) (3) Data frame sent\nI0123 23:54:56.677919 1208 log.go:172] (0xc0003b42c0) Data frame received for 1\nI0123 23:54:56.678025 1208 log.go:172] (0xc0004f7720) (1) Data frame handling\nI0123 23:54:56.678059 1208 log.go:172] (0xc0004f7720) (1) Data frame sent\nI0123 23:54:56.678219 1208 log.go:172] (0xc0003b42c0) (0xc00086a000) Stream removed, broadcasting: 5\nI0123 23:54:56.678391 1208 log.go:172] (0xc0003b42c0) (0xc0008e0000) Stream removed, broadcasting: 3\nI0123 23:54:56.678638 1208 log.go:172] (0xc0003b42c0) (0xc0004f7720) Stream removed, broadcasting: 1\nI0123 23:54:56.678716 1208 log.go:172] (0xc0003b42c0) Go away received\nI0123 23:54:56.680011 1208 log.go:172] (0xc0003b42c0) (0xc0004f7720) Stream removed, broadcasting: 1\nI0123 23:54:56.680058 1208 log.go:172] (0xc0003b42c0) (0xc0008e0000) Stream removed, broadcasting: 3\nI0123 23:54:56.680070 1208 log.go:172] (0xc0003b42c0) (0xc00086a000) Stream removed, broadcasting: 5\n" Jan 23 23:54:56.688: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 23 23:54:56.688: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 23 23:54:56.693: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 23 23:55:06.701: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 23 23:55:06.702: INFO: Waiting for statefulset status.replicas updated to 0 Jan 23 23:55:06.866: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 23:55:06.866: INFO: ss-0 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:54:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:54:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:54:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:54:46 +0000 UTC }] Jan 23 23:55:06.866: INFO: Jan 23 23:55:06.866: INFO: StatefulSet ss has not reached scale 3, at 1 Jan 23 23:55:07.873: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.848933698s Jan 23 23:55:09.226: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.842183616s Jan 23 23:55:10.395: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.48930928s Jan 23 23:55:11.403: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.320113375s Jan 23 23:55:12.415: INFO: Verifying statefulset ss 
doesn't scale past 3 for another 4.311908607s Jan 23 23:55:14.311: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.299842765s Jan 23 23:55:15.688: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.404490107s Jan 23 23:55:16.693: INFO: Verifying statefulset ss doesn't scale past 3 for another 26.981951ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5746 Jan 23 23:55:17.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:55:18.140: INFO: stderr: "I0123 23:55:17.946922 1228 log.go:172] (0xc0009c6c60) (0xc000a4e280) Create stream\nI0123 23:55:17.947250 1228 log.go:172] (0xc0009c6c60) (0xc000a4e280) Stream added, broadcasting: 1\nI0123 23:55:17.956383 1228 log.go:172] (0xc0009c6c60) Reply frame received for 1\nI0123 23:55:17.956468 1228 log.go:172] (0xc0009c6c60) (0xc000a4e320) Create stream\nI0123 23:55:17.956493 1228 log.go:172] (0xc0009c6c60) (0xc000a4e320) Stream added, broadcasting: 3\nI0123 23:55:17.958608 1228 log.go:172] (0xc0009c6c60) Reply frame received for 3\nI0123 23:55:17.958647 1228 log.go:172] (0xc0009c6c60) (0xc000988500) Create stream\nI0123 23:55:17.958656 1228 log.go:172] (0xc0009c6c60) (0xc000988500) Stream added, broadcasting: 5\nI0123 23:55:17.962803 1228 log.go:172] (0xc0009c6c60) Reply frame received for 5\nI0123 23:55:18.036638 1228 log.go:172] (0xc0009c6c60) Data frame received for 5\nI0123 23:55:18.036677 1228 log.go:172] (0xc000988500) (5) Data frame handling\nI0123 23:55:18.036698 1228 log.go:172] (0xc000988500) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0123 23:55:18.037086 1228 log.go:172] (0xc0009c6c60) Data frame received for 3\nI0123 23:55:18.037122 1228 log.go:172] (0xc000a4e320) (3) Data frame handling\nI0123 23:55:18.037138 1228 log.go:172] (0xc000a4e320) (3) Data frame sent\nI0123 23:55:18.132718 1228 log.go:172] (0xc0009c6c60) (0xc000a4e320) Stream removed, broadcasting: 3\nI0123 23:55:18.132812 1228 log.go:172] (0xc0009c6c60) Data frame received for 1\nI0123 23:55:18.132839 1228 log.go:172] (0xc0009c6c60) (0xc000988500) Stream removed, broadcasting: 5\nI0123 23:55:18.132866 1228 log.go:172] (0xc000a4e280) (1) Data frame handling\nI0123 23:55:18.132879 1228 log.go:172] (0xc000a4e280) (1) Data frame sent\nI0123 23:55:18.132885 1228 log.go:172] (0xc0009c6c60) (0xc000a4e280) Stream removed, broadcasting: 1\nI0123 23:55:18.132894 1228 log.go:172] (0xc0009c6c60) Go away received\nI0123 23:55:18.133476 1228 log.go:172] (0xc0009c6c60) (0xc000a4e280) Stream removed, broadcasting: 1\nI0123 23:55:18.133490 1228 log.go:172] (0xc0009c6c60) (0xc000a4e320) Stream removed, broadcasting: 3\nI0123 23:55:18.133497 1228 log.go:172] (0xc0009c6c60) (0xc000988500) Stream removed, broadcasting: 5\n" Jan 23 23:55:18.140: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 23 23:55:18.140: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 23 23:55:18.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:55:18.760: INFO: stderr: "I0123 23:55:18.469787 1247 log.go:172] (0xc00051a0b0) (0xc00094a640) Create stream\nI0123 23:55:18.470126 
1247 log.go:172] (0xc00051a0b0) (0xc00094a640) Stream added, broadcasting: 1\nI0123 23:55:18.475321 1247 log.go:172] (0xc00051a0b0) Reply frame received for 1\nI0123 23:55:18.475380 1247 log.go:172] (0xc00051a0b0) (0xc000612460) Create stream\nI0123 23:55:18.475399 1247 log.go:172] (0xc00051a0b0) (0xc000612460) Stream added, broadcasting: 3\nI0123 23:55:18.477476 1247 log.go:172] (0xc00051a0b0) Reply frame received for 3\nI0123 23:55:18.477514 1247 log.go:172] (0xc00051a0b0) (0xc00098bc20) Create stream\nI0123 23:55:18.477520 1247 log.go:172] (0xc00051a0b0) (0xc00098bc20) Stream added, broadcasting: 5\nI0123 23:55:18.478604 1247 log.go:172] (0xc00051a0b0) Reply frame received for 5\nI0123 23:55:18.577209 1247 log.go:172] (0xc00051a0b0) Data frame received for 5\nI0123 23:55:18.577615 1247 log.go:172] (0xc00098bc20) (5) Data frame handling\nI0123 23:55:18.577715 1247 log.go:172] (0xc00098bc20) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0123 23:55:18.580701 1247 log.go:172] (0xc00051a0b0) Data frame received for 3\nI0123 23:55:18.580714 1247 log.go:172] (0xc000612460) (3) Data frame handling\nI0123 23:55:18.580722 1247 log.go:172] (0xc000612460) (3) Data frame sent\nI0123 23:55:18.580739 1247 log.go:172] (0xc00051a0b0) Data frame received for 5\nI0123 23:55:18.580764 1247 log.go:172] (0xc00098bc20) (5) Data frame handling\nI0123 23:55:18.580783 1247 log.go:172] (0xc00098bc20) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0123 23:55:18.581128 1247 log.go:172] (0xc00051a0b0) Data frame received for 5\nI0123 23:55:18.581150 1247 log.go:172] (0xc00098bc20) (5) Data frame handling\nI0123 23:55:18.581169 1247 log.go:172] (0xc00098bc20) (5) Data frame sent\n+ true\nI0123 23:55:18.746013 1247 log.go:172] (0xc00051a0b0) Data frame received for 1\nI0123 23:55:18.746560 1247 log.go:172] (0xc00094a640) (1) Data frame handling\nI0123 23:55:18.746588 1247 log.go:172] (0xc00094a640) (1) Data frame sent\nI0123 23:55:18.747207 1247 log.go:172] (0xc00051a0b0) (0xc00094a640) Stream removed, broadcasting: 1\nI0123 23:55:18.749074 1247 log.go:172] (0xc00051a0b0) (0xc000612460) Stream removed, broadcasting: 3\nI0123 23:55:18.749388 1247 log.go:172] (0xc00051a0b0) (0xc00098bc20) Stream removed, broadcasting: 5\nI0123 23:55:18.749451 1247 log.go:172] (0xc00051a0b0) (0xc00094a640) Stream removed, broadcasting: 1\nI0123 23:55:18.749522 1247 log.go:172] (0xc00051a0b0) (0xc000612460) Stream removed, broadcasting: 3\nI0123 23:55:18.749553 1247 log.go:172] (0xc00051a0b0) (0xc00098bc20) Stream removed, broadcasting: 5\nI0123 23:55:18.749644 1247 log.go:172] (0xc00051a0b0) Go away received\n" Jan 23 23:55:18.760: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 23 23:55:18.760: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 23 23:55:18.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:55:19.132: INFO: stderr: "I0123 23:55:18.923602 1268 log.go:172] (0xc0007caa50) (0xc0006e6000) Create stream\nI0123 23:55:18.923855 1268 log.go:172] (0xc0007caa50) (0xc0006e6000) Stream added, broadcasting: 1\nI0123 23:55:18.927107 1268 log.go:172] (0xc0007caa50) Reply frame received for 1\nI0123 23:55:18.927136 1268 log.go:172] (0xc0007caa50) (0xc000585d60) Create stream\nI0123 23:55:18.927144 
1268 log.go:172] (0xc0007caa50) (0xc000585d60) Stream added, broadcasting: 3\nI0123 23:55:18.928872 1268 log.go:172] (0xc0007caa50) Reply frame received for 3\nI0123 23:55:18.928949 1268 log.go:172] (0xc0007caa50) (0xc0006e6140) Create stream\nI0123 23:55:18.928962 1268 log.go:172] (0xc0007caa50) (0xc0006e6140) Stream added, broadcasting: 5\nI0123 23:55:18.932577 1268 log.go:172] (0xc0007caa50) Reply frame received for 5\nI0123 23:55:19.004140 1268 log.go:172] (0xc0007caa50) Data frame received for 5\nI0123 23:55:19.004253 1268 log.go:172] (0xc0006e6140) (5) Data frame handling\nI0123 23:55:19.004285 1268 log.go:172] (0xc0006e6140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0123 23:55:19.004340 1268 log.go:172] (0xc0007caa50) Data frame received for 3\nI0123 23:55:19.004362 1268 log.go:172] (0xc000585d60) (3) Data frame handling\nI0123 23:55:19.004386 1268 log.go:172] (0xc000585d60) (3) Data frame sent\nI0123 23:55:19.121151 1268 log.go:172] (0xc0007caa50) (0xc000585d60) Stream removed, broadcasting: 3\nI0123 23:55:19.121276 1268 log.go:172] (0xc0007caa50) Data frame received for 1\nI0123 23:55:19.121303 1268 log.go:172] (0xc0007caa50) (0xc0006e6140) Stream removed, broadcasting: 5\nI0123 23:55:19.121363 1268 log.go:172] (0xc0006e6000) (1) Data frame handling\nI0123 23:55:19.121387 1268 log.go:172] (0xc0006e6000) (1) Data frame sent\nI0123 23:55:19.121401 1268 log.go:172] (0xc0007caa50) (0xc0006e6000) Stream removed, broadcasting: 1\nI0123 23:55:19.121415 1268 log.go:172] (0xc0007caa50) Go away received\nI0123 23:55:19.121963 1268 log.go:172] (0xc0007caa50) (0xc0006e6000) Stream removed, broadcasting: 1\nI0123 23:55:19.121975 1268 log.go:172] (0xc0007caa50) (0xc000585d60) Stream removed, broadcasting: 3\nI0123 23:55:19.121981 1268 log.go:172] (0xc0007caa50) (0xc0006e6140) Stream removed, broadcasting: 5\n" Jan 23 23:55:19.132: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 23 23:55:19.132: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 23 23:55:19.137: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 23 23:55:19.137: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 23 23:55:19.137: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jan 23 23:55:19.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 23 23:55:19.387: INFO: stderr: "I0123 23:55:19.260278 1289 log.go:172] (0xc00020efd0) (0xc00067bae0) Create stream\nI0123 23:55:19.260398 1289 log.go:172] (0xc00020efd0) (0xc00067bae0) Stream added, broadcasting: 1\nI0123 23:55:19.263456 1289 log.go:172] (0xc00020efd0) Reply frame received for 1\nI0123 23:55:19.263481 1289 log.go:172] (0xc00020efd0) (0xc00067bcc0) Create stream\nI0123 23:55:19.263486 1289 log.go:172] (0xc00020efd0) (0xc00067bcc0) Stream added, broadcasting: 3\nI0123 23:55:19.264956 1289 log.go:172] (0xc00020efd0) Reply frame received for 3\nI0123 23:55:19.265009 1289 log.go:172] (0xc00020efd0) (0xc000954000) Create stream\nI0123 23:55:19.265035 1289 log.go:172] (0xc00020efd0) (0xc000954000) Stream added, broadcasting: 5\nI0123 
23:55:19.266695 1289 log.go:172] (0xc00020efd0) Reply frame received for 5\nI0123 23:55:19.321769 1289 log.go:172] (0xc00020efd0) Data frame received for 5\nI0123 23:55:19.321917 1289 log.go:172] (0xc000954000) (5) Data frame handling\nI0123 23:55:19.321939 1289 log.go:172] (0xc000954000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0123 23:55:19.322255 1289 log.go:172] (0xc00020efd0) Data frame received for 3\nI0123 23:55:19.322276 1289 log.go:172] (0xc00067bcc0) (3) Data frame handling\nI0123 23:55:19.322293 1289 log.go:172] (0xc00067bcc0) (3) Data frame sent\nI0123 23:55:19.381704 1289 log.go:172] (0xc00020efd0) (0xc00067bcc0) Stream removed, broadcasting: 3\nI0123 23:55:19.381788 1289 log.go:172] (0xc00020efd0) Data frame received for 1\nI0123 23:55:19.381813 1289 log.go:172] (0xc00067bae0) (1) Data frame handling\nI0123 23:55:19.381827 1289 log.go:172] (0xc00067bae0) (1) Data frame sent\nI0123 23:55:19.381868 1289 log.go:172] (0xc00020efd0) (0xc000954000) Stream removed, broadcasting: 5\nI0123 23:55:19.381912 1289 log.go:172] (0xc00020efd0) (0xc00067bae0) Stream removed, broadcasting: 1\nI0123 23:55:19.381937 1289 log.go:172] (0xc00020efd0) Go away received\nI0123 23:55:19.382765 1289 log.go:172] (0xc00020efd0) (0xc00067bae0) Stream removed, broadcasting: 1\nI0123 23:55:19.382782 1289 log.go:172] (0xc00020efd0) (0xc00067bcc0) Stream removed, broadcasting: 3\nI0123 23:55:19.382790 1289 log.go:172] (0xc00020efd0) (0xc000954000) Stream removed, broadcasting: 5\n" Jan 23 23:55:19.388: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 23 23:55:19.388: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 23 23:55:19.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 23 23:55:19.697: INFO: stderr: "I0123 23:55:19.538396 1309 log.go:172] (0xc0009cc6e0) (0xc0005dff40) Create stream\nI0123 23:55:19.538502 1309 log.go:172] (0xc0009cc6e0) (0xc0005dff40) Stream added, broadcasting: 1\nI0123 23:55:19.541512 1309 log.go:172] (0xc0009cc6e0) Reply frame received for 1\nI0123 23:55:19.541547 1309 log.go:172] (0xc0009cc6e0) (0xc0005a8820) Create stream\nI0123 23:55:19.541562 1309 log.go:172] (0xc0009cc6e0) (0xc0005a8820) Stream added, broadcasting: 3\nI0123 23:55:19.542495 1309 log.go:172] (0xc0009cc6e0) Reply frame received for 3\nI0123 23:55:19.542579 1309 log.go:172] (0xc0009cc6e0) (0xc0008e6000) Create stream\nI0123 23:55:19.542600 1309 log.go:172] (0xc0009cc6e0) (0xc0008e6000) Stream added, broadcasting: 5\nI0123 23:55:19.543431 1309 log.go:172] (0xc0009cc6e0) Reply frame received for 5\nI0123 23:55:19.603462 1309 log.go:172] (0xc0009cc6e0) Data frame received for 5\nI0123 23:55:19.603517 1309 log.go:172] (0xc0008e6000) (5) Data frame handling\nI0123 23:55:19.603529 1309 log.go:172] (0xc0008e6000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0123 23:55:19.627834 1309 log.go:172] (0xc0009cc6e0) Data frame received for 3\nI0123 23:55:19.627874 1309 log.go:172] (0xc0005a8820) (3) Data frame handling\nI0123 23:55:19.627896 1309 log.go:172] (0xc0005a8820) (3) Data frame sent\nI0123 23:55:19.691208 1309 log.go:172] (0xc0009cc6e0) Data frame received for 1\nI0123 23:55:19.691388 1309 log.go:172] (0xc0009cc6e0) (0xc0005a8820) Stream removed, broadcasting: 3\nI0123 23:55:19.691471 1309 
log.go:172] (0xc0005dff40) (1) Data frame handling\nI0123 23:55:19.691495 1309 log.go:172] (0xc0005dff40) (1) Data frame sent\nI0123 23:55:19.691554 1309 log.go:172] (0xc0009cc6e0) (0xc0008e6000) Stream removed, broadcasting: 5\nI0123 23:55:19.691598 1309 log.go:172] (0xc0009cc6e0) (0xc0005dff40) Stream removed, broadcasting: 1\nI0123 23:55:19.691613 1309 log.go:172] (0xc0009cc6e0) Go away received\nI0123 23:55:19.692409 1309 log.go:172] (0xc0009cc6e0) (0xc0005dff40) Stream removed, broadcasting: 1\nI0123 23:55:19.692461 1309 log.go:172] (0xc0009cc6e0) (0xc0005a8820) Stream removed, broadcasting: 3\nI0123 23:55:19.692486 1309 log.go:172] (0xc0009cc6e0) (0xc0008e6000) Stream removed, broadcasting: 5\n" Jan 23 23:55:19.698: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 23 23:55:19.698: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 23 23:55:19.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 23 23:55:20.095: INFO: stderr: "I0123 23:55:19.875211 1331 log.go:172] (0xc00010b3f0) (0xc0008a6000) Create stream\nI0123 23:55:19.875287 1331 log.go:172] (0xc00010b3f0) (0xc0008a6000) Stream added, broadcasting: 1\nI0123 23:55:19.879069 1331 log.go:172] (0xc00010b3f0) Reply frame received for 1\nI0123 23:55:19.879209 1331 log.go:172] (0xc00010b3f0) (0xc000601c20) Create stream\nI0123 23:55:19.879234 1331 log.go:172] (0xc00010b3f0) (0xc000601c20) Stream added, broadcasting: 3\nI0123 23:55:19.880923 1331 log.go:172] (0xc00010b3f0) Reply frame received for 3\nI0123 23:55:19.880956 1331 log.go:172] (0xc00010b3f0) (0xc0008a60a0) Create stream\nI0123 23:55:19.880968 1331 log.go:172] (0xc00010b3f0) (0xc0008a60a0) Stream added, broadcasting: 5\nI0123 23:55:19.882878 1331 log.go:172] (0xc00010b3f0) Reply frame received for 5\nI0123 23:55:19.973669 1331 log.go:172] (0xc00010b3f0) Data frame received for 5\nI0123 23:55:19.973746 1331 log.go:172] (0xc0008a60a0) (5) Data frame handling\nI0123 23:55:19.973771 1331 log.go:172] (0xc0008a60a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0123 23:55:20.007950 1331 log.go:172] (0xc00010b3f0) Data frame received for 3\nI0123 23:55:20.008055 1331 log.go:172] (0xc000601c20) (3) Data frame handling\nI0123 23:55:20.008085 1331 log.go:172] (0xc000601c20) (3) Data frame sent\nI0123 23:55:20.084140 1331 log.go:172] (0xc00010b3f0) (0xc000601c20) Stream removed, broadcasting: 3\nI0123 23:55:20.084274 1331 log.go:172] (0xc00010b3f0) Data frame received for 1\nI0123 23:55:20.084305 1331 log.go:172] (0xc0008a6000) (1) Data frame handling\nI0123 23:55:20.084376 1331 log.go:172] (0xc0008a6000) (1) Data frame sent\nI0123 23:55:20.084404 1331 log.go:172] (0xc00010b3f0) (0xc0008a6000) Stream removed, broadcasting: 1\nI0123 23:55:20.084454 1331 log.go:172] (0xc00010b3f0) (0xc0008a60a0) Stream removed, broadcasting: 5\nI0123 23:55:20.085444 1331 log.go:172] (0xc00010b3f0) (0xc0008a6000) Stream removed, broadcasting: 1\nI0123 23:55:20.085472 1331 log.go:172] (0xc00010b3f0) (0xc000601c20) Stream removed, broadcasting: 3\nI0123 23:55:20.085485 1331 log.go:172] (0xc00010b3f0) (0xc0008a60a0) Stream removed, broadcasting: 5\n" Jan 23 23:55:20.095: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 23 23:55:20.095: INFO: stdout of mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 23 23:55:20.095: INFO: Waiting for statefulset status.replicas updated to 0 Jan 23 23:55:20.121: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jan 23 23:55:30.135: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 23 23:55:30.135: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 23 23:55:30.135: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 23 23:55:30.150: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 23:55:30.150: INFO: ss-0 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:54:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:54:46 +0000 UTC }] Jan 23 23:55:30.150: INFO: ss-1 jerma-server-mvvl6gufaqub Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:06 +0000 UTC }] Jan 23 23:55:30.150: INFO: ss-2 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:06 +0000 UTC }] Jan 23 23:55:30.151: INFO: Jan 23 23:55:30.151: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 23 23:55:31.748: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 23:55:31.748: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:54:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:54:46 +0000 UTC }] Jan 23 23:55:31.749: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:06 +0000 UTC }] Jan 23 23:55:31.749: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 
UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:06 +0000 UTC }] Jan 23 23:55:31.749: INFO: Jan 23 23:55:31.749: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 23 23:55:32.755: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 23:55:32.755: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:54:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:54:46 +0000 UTC }] Jan 23 23:55:32.755: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:06 +0000 UTC }] Jan 23 23:55:32.755: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:06 +0000 UTC }] Jan 23 23:55:32.755: INFO: Jan 23 23:55:32.755: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 23 23:55:33.772: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 23:55:33.772: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:54:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:54:46 +0000 UTC }] Jan 23 23:55:33.772: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:06 +0000 UTC }] Jan 23 23:55:33.772: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:06 +0000 UTC }] Jan 23 23:55:33.772: INFO: Jan 23 23:55:33.772: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 23 23:55:34.777: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 23:55:34.778: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:54:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:54:46 +0000 UTC }] Jan 23 23:55:34.778: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:06 +0000 UTC }] Jan 23 23:55:34.778: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:06 +0000 UTC }] Jan 23 23:55:34.778: INFO: Jan 23 23:55:34.778: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 23 23:55:35.793: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 23:55:35.793: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:54:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:54:46 +0000 UTC }] Jan 23 23:55:35.793: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:06 +0000 UTC }] Jan 23 23:55:35.793: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:06 +0000 UTC }] Jan 23 23:55:35.793: INFO: Jan 23 23:55:35.793: 
INFO: StatefulSet ss has not reached scale 0, at 3 Jan 23 23:55:36.799: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 23:55:36.799: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:54:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:54:46 +0000 UTC }] Jan 23 23:55:36.799: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:06 +0000 UTC }] Jan 23 23:55:36.799: INFO: Jan 23 23:55:36.799: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 23 23:55:37.814: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 23:55:37.814: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:54:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:54:46 +0000 UTC }] Jan 23 23:55:37.814: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:06 +0000 UTC }] Jan 23 23:55:37.814: INFO: Jan 23 23:55:37.814: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 23 23:55:38.824: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 23:55:38.824: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:54:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:54:46 +0000 UTC }] Jan 23 23:55:38.824: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:06 +0000 UTC }] Jan 23 23:55:38.824: INFO: Jan 23 23:55:38.824: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 23 
23:55:39.832: INFO: POD NODE PHASE GRACE CONDITIONS Jan 23 23:55:39.832: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:54:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:54:46 +0000 UTC }] Jan 23 23:55:39.832: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-23 23:55:06 +0000 UTC }] Jan 23 23:55:39.832: INFO: Jan 23 23:55:39.832: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-5746 Jan 23 23:55:40.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:55:41.035: INFO: rc: 1 Jan 23 23:55:41.035: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jan 23 23:55:51.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:55:51.202: INFO: rc: 1 Jan 23 23:55:51.202: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 23:56:01.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:56:01.375: INFO: rc: 1 Jan 23 23:56:01.375: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 23:56:11.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:56:11.546: INFO: rc: 1 Jan 23 23:56:11.546: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 23:56:21.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:56:21.730: INFO: rc: 1 Jan 23 23:56:21.731: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 23:56:31.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:56:31.942: INFO: rc: 1 Jan 23 23:56:31.942: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 23:56:41.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:56:42.162: INFO: rc: 1 Jan 23 23:56:42.162: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 23:56:52.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:56:52.366: INFO: rc: 1 Jan 23 23:56:52.366: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 23:57:02.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:57:02.523: INFO: rc: 1 Jan 23 23:57:02.523: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 23:57:12.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:57:12.646: INFO: rc: 1 Jan 23 23:57:12.646: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods 
"ss-0" not found error: exit status 1 Jan 23 23:57:22.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:57:22.823: INFO: rc: 1 Jan 23 23:57:22.823: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 23:57:32.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:57:32.955: INFO: rc: 1 Jan 23 23:57:32.955: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 23:57:42.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:57:43.133: INFO: rc: 1 Jan 23 23:57:43.133: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 23:57:53.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:57:53.278: INFO: rc: 1 Jan 23 23:57:53.278: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 23:58:03.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:58:03.463: INFO: rc: 1 Jan 23 23:58:03.463: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 23:58:13.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:58:13.627: INFO: rc: 1 Jan 23 23:58:13.628: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 
23:58:23.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:58:23.785: INFO: rc: 1 Jan 23 23:58:23.785: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 23:58:33.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:58:34.030: INFO: rc: 1 Jan 23 23:58:34.030: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 23:58:44.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:58:44.144: INFO: rc: 1 Jan 23 23:58:44.144: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 23:58:54.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:58:54.261: INFO: rc: 1 Jan 23 23:58:54.261: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 23:59:04.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:59:04.372: INFO: rc: 1 Jan 23 23:59:04.372: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 23:59:14.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:59:14.568: INFO: rc: 1 Jan 23 23:59:14.568: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 23:59:24.569: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:59:24.709: INFO: rc: 1 Jan 23 23:59:24.710: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 23:59:34.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:59:34.831: INFO: rc: 1 Jan 23 23:59:34.831: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 23:59:44.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:59:45.103: INFO: rc: 1 Jan 23 23:59:45.103: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 23 23:59:55.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 23 23:59:55.273: INFO: rc: 1 Jan 23 23:59:55.274: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 00:00:05.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 24 00:00:05.379: INFO: rc: 1 Jan 24 00:00:05.379: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 00:00:15.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 24 00:00:15.516: INFO: rc: 1 Jan 24 00:00:15.516: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 00:00:25.516: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 24 00:00:25.664: INFO: rc: 1 Jan 24 00:00:25.664: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 00:00:35.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 24 00:00:35.856: INFO: rc: 1 Jan 24 00:00:35.857: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 24 00:00:45.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5746 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 24 00:00:46.017: INFO: rc: 1 Jan 24 00:00:46.017: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Jan 24 00:00:46.017: INFO: Scaling statefulset ss to 0 Jan 24 00:00:46.025: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 24 00:00:46.029: INFO: Deleting all statefulset in ns statefulset-5746 Jan 24 00:00:46.031: INFO: Scaling statefulset ss to 0 Jan 24 00:00:46.038: INFO: Waiting for statefulset status.replicas updated to 0 Jan 24 00:00:46.040: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:00:46.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5746" for this suite. 
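The retry loop above is only the test's file-moving probe; the scale-down itself is a plain replica change. A minimal way to reproduce it by hand, assuming the ss StatefulSet and the statefulset-5746 namespace from this run still existed:

    # Scale the StatefulSet to zero, then watch status.replicas drain, as the test does
    kubectl --kubeconfig=/root/.kube/config scale statefulset ss --replicas=0 --namespace=statefulset-5746
    kubectl --kubeconfig=/root/.kube/config get statefulset ss --namespace=statefulset-5746 -o jsonpath='{.status.replicas}'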
• [SLOW TEST:360.258 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":59,"skipped":940,"failed":0} S ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:00:46.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6881.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6881.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6881.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6881.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6881.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6881.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 24 00:00:56.291: INFO: DNS probes using dns-6881/dns-test-6ce24124-e3a3-45a4-b0e3-f678d221a5f9 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:00:56.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6881" for this suite. 
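The /etc/hosts probes above run getent in a loop inside the probe pod; a one-shot version of the same check, assuming the probe pod from this run were still present (the test deletes it at the end):

    # Resolve the pod's own hostname entry from the kubelet-managed /etc/hosts;
    # add -c <container> if the probe pod runs more than one container
    kubectl exec dns-test-6ce24124-e3a3-45a4-b0e3-f678d221a5f9 --namespace=dns-6881 -- \
      getent hosts dns-querier-1.dns-test-service.dns-6881.svc.cluster.local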
• [SLOW TEST:10.442 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":60,"skipped":941,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:00:56.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-3896 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a new StatefulSet Jan 24 00:00:56.706: INFO: Found 0 stateful pods, waiting for 3 Jan 24 00:01:06.714: INFO: Found 2 stateful pods, waiting for 3 Jan 24 00:01:16.710: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 24 00:01:16.711: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 24 00:01:16.711: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 24 00:01:26.715: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 24 00:01:26.716: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 24 00:01:26.716: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jan 24 00:01:26.748: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jan 24 00:01:36.926: INFO: Updating stateful set ss2 Jan 24 00:01:36.952: INFO: Waiting for Pod statefulset-3896/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 24 00:01:46.965: INFO: Waiting for Pod statefulset-3896/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Jan 24 00:01:57.269: INFO: Found 2 stateful pods, waiting for 3 Jan 24 00:02:07.274: INFO: Found 2 stateful pods, waiting for 3 Jan 24 00:02:17.329: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - 
Ready=true Jan 24 00:02:17.330: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 24 00:02:17.330: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jan 24 00:02:17.353: INFO: Updating stateful set ss2 Jan 24 00:02:17.550: INFO: Waiting for Pod statefulset-3896/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 24 00:02:27.563: INFO: Waiting for Pod statefulset-3896/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 24 00:02:38.236: INFO: Updating stateful set ss2 Jan 24 00:02:38.276: INFO: Waiting for StatefulSet statefulset-3896/ss2 to complete update Jan 24 00:02:38.276: INFO: Waiting for Pod statefulset-3896/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 24 00:02:48.296: INFO: Waiting for StatefulSet statefulset-3896/ss2 to complete update Jan 24 00:02:48.296: INFO: Waiting for Pod statefulset-3896/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 24 00:02:58.286: INFO: Waiting for StatefulSet statefulset-3896/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 24 00:03:08.292: INFO: Deleting all statefulset in ns statefulset-3896 Jan 24 00:03:08.296: INFO: Scaling statefulset ss2 to 0 Jan 24 00:03:48.325: INFO: Waiting for statefulset status.replicas updated to 0 Jan 24 00:03:48.333: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:03:48.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3896" for this suite. 
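The canary and phased steps above are driven by the RollingUpdate partition of the StatefulSet: ordinals at or above the partition move to the new template revision, the rest stay put. A sketch of the two patches involved, assuming the ss2 StatefulSet in statefulset-3896 with 3 replicas:

    # Canary: only ordinals >= 2 (i.e. ss2-2) receive the new template revision
    kubectl patch statefulset ss2 --namespace=statefulset-3896 --type=merge \
      -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
    # Phased roll-out: lowering the partition lets the remaining ordinals update
    kubectl patch statefulset ss2 --namespace=statefulset-3896 --type=merge \
      -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":0}}}}'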
• [SLOW TEST:171.854 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":61,"skipped":978,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:03:48.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1597 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 24 00:03:48.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5019' Jan 24 00:03:50.893: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 24 00:03:50.893: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1603 Jan 24 00:03:52.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5019' Jan 24 00:03:53.185: INFO: stderr: "" Jan 24 00:03:53.185: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:03:53.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5019" for this suite. 
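The stderr above flags the generator form of kubectl run as deprecated; the same deployment can be created with the command the warning recommends:

    # Non-deprecated equivalent of 'kubectl run --generator=deployment/apps.v1'
    kubectl --kubeconfig=/root/.kube/config create deployment e2e-test-httpd-deployment \
      --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5019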
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":62,"skipped":987,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:03:53.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-b7a5d922-5b61-4973-b36b-8590d15270b0 STEP: Creating a pod to test consume secrets Jan 24 00:03:53.688: INFO: Waiting up to 5m0s for pod "pod-secrets-6418fcec-5786-4bb4-9fad-d3c577a97f61" in namespace "secrets-6624" to be "success or failure" Jan 24 00:03:53.729: INFO: Pod "pod-secrets-6418fcec-5786-4bb4-9fad-d3c577a97f61": Phase="Pending", Reason="", readiness=false. Elapsed: 40.269651ms Jan 24 00:03:55.734: INFO: Pod "pod-secrets-6418fcec-5786-4bb4-9fad-d3c577a97f61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045474217s Jan 24 00:03:57.739: INFO: Pod "pod-secrets-6418fcec-5786-4bb4-9fad-d3c577a97f61": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050535474s Jan 24 00:03:59.744: INFO: Pod "pod-secrets-6418fcec-5786-4bb4-9fad-d3c577a97f61": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055241296s Jan 24 00:04:01.751: INFO: Pod "pod-secrets-6418fcec-5786-4bb4-9fad-d3c577a97f61": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062426894s Jan 24 00:04:03.760: INFO: Pod "pod-secrets-6418fcec-5786-4bb4-9fad-d3c577a97f61": Phase="Pending", Reason="", readiness=false. Elapsed: 10.071722551s Jan 24 00:04:05.768: INFO: Pod "pod-secrets-6418fcec-5786-4bb4-9fad-d3c577a97f61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.079475574s STEP: Saw pod success Jan 24 00:04:05.768: INFO: Pod "pod-secrets-6418fcec-5786-4bb4-9fad-d3c577a97f61" satisfied condition "success or failure" Jan 24 00:04:05.773: INFO: Trying to get logs from node jerma-node pod pod-secrets-6418fcec-5786-4bb4-9fad-d3c577a97f61 container secret-volume-test: STEP: delete the pod Jan 24 00:04:05.900: INFO: Waiting for pod pod-secrets-6418fcec-5786-4bb4-9fad-d3c577a97f61 to disappear Jan 24 00:04:05.907: INFO: Pod pod-secrets-6418fcec-5786-4bb4-9fad-d3c577a97f61 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:04:05.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6624" for this suite. 
• [SLOW TEST:12.586 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":990,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:04:05.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 24 00:04:06.350: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 24 00:04:08.375: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421046, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421046, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421046, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421046, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:04:10.381: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421046, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421046, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421046, loc:(*time.Location)(0x7d7cf00)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421046, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:04:12.380: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421046, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421046, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421046, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421046, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 24 00:04:15.445: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 24 00:04:15.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:04:16.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4525" for this suite. STEP: Destroying namespace "webhook-4525-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:10.879 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":64,"skipped":1028,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:04:16.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:04:23.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4245" for this suite. STEP: Destroying namespace "nsdeletetest-970" for this suite. Jan 24 00:04:23.328: INFO: Namespace nsdeletetest-970 was already deleted STEP: Destroying namespace "nsdeletetest-795" for this suite. 
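The admission-webhook test above (before the Namespaces entry) registers a validating webhook for a custom resource via the AdmissionRegistration API. A schematic registration under assumed names; deny-crd-webhook, the e2e-test-webhook service path, and the example.com group with its testcrds resource are all illustrative, not taken from the suite:

    kubectl apply -f - <<'EOF'
    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: deny-crd-webhook
    webhooks:
    - name: deny-custom-resource.example.com
      rules:
      - apiGroups: ["example.com"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE", "DELETE"]
        resources: ["testcrds"]
      clientConfig:
        service:
          name: e2e-test-webhook
          namespace: webhook-4525
          path: /custom-resource
        caBundle: Cg==   # placeholder; must be the base64 CA that signed the webhook's serving cert
      admissionReviewVersions: ["v1", "v1beta1"]
      sideEffects: None
      failurePolicy: Fail
    EOF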
• [SLOW TEST:6.536 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":65,"skipped":1039,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:04:23.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 24 00:04:23.522: INFO: Waiting up to 5m0s for pod "downwardapi-volume-277c774a-57db-4476-8387-d1cddfba4bf6" in namespace "projected-2134" to be "success or failure" Jan 24 00:04:23.547: INFO: Pod "downwardapi-volume-277c774a-57db-4476-8387-d1cddfba4bf6": Phase="Pending", Reason="", readiness=false. Elapsed: 24.833734ms Jan 24 00:04:25.559: INFO: Pod "downwardapi-volume-277c774a-57db-4476-8387-d1cddfba4bf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036828822s Jan 24 00:04:27.566: INFO: Pod "downwardapi-volume-277c774a-57db-4476-8387-d1cddfba4bf6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043251787s Jan 24 00:04:29.573: INFO: Pod "downwardapi-volume-277c774a-57db-4476-8387-d1cddfba4bf6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050662446s Jan 24 00:04:31.579: INFO: Pod "downwardapi-volume-277c774a-57db-4476-8387-d1cddfba4bf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.056833193s STEP: Saw pod success Jan 24 00:04:31.579: INFO: Pod "downwardapi-volume-277c774a-57db-4476-8387-d1cddfba4bf6" satisfied condition "success or failure" Jan 24 00:04:31.584: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-277c774a-57db-4476-8387-d1cddfba4bf6 container client-container: STEP: delete the pod Jan 24 00:04:31.626: INFO: Waiting for pod downwardapi-volume-277c774a-57db-4476-8387-d1cddfba4bf6 to disappear Jan 24 00:04:31.636: INFO: Pod downwardapi-volume-277c774a-57db-4476-8387-d1cddfba4bf6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:04:31.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2134" for this suite. 
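The projected downwardAPI volume above surfaces the container's memory limit through a resourceFieldRef. A minimal sketch with illustrative names (the suite's generated names differ):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
        resources:
          limits:
            memory: 64Mi
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: memory_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.memory
                  divisor: 1Mi   # file contains limit/divisor, here 64
    EOF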
• [SLOW TEST:8.317 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1052,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:04:31.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-3b1d47de-a4f8-4b90-b584-00337bdfaf3c STEP: Creating a pod to test consume secrets Jan 24 00:04:32.551: INFO: Waiting up to 5m0s for pod "pod-secrets-67f14733-95d1-4a29-bde3-acb6c49bfdb5" in namespace "secrets-379" to be "success or failure" Jan 24 00:04:32.570: INFO: Pod "pod-secrets-67f14733-95d1-4a29-bde3-acb6c49bfdb5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.362682ms Jan 24 00:04:34.577: INFO: Pod "pod-secrets-67f14733-95d1-4a29-bde3-acb6c49bfdb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025854586s Jan 24 00:04:36.587: INFO: Pod "pod-secrets-67f14733-95d1-4a29-bde3-acb6c49bfdb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035567891s Jan 24 00:04:38.598: INFO: Pod "pod-secrets-67f14733-95d1-4a29-bde3-acb6c49bfdb5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046454104s Jan 24 00:04:40.604: INFO: Pod "pod-secrets-67f14733-95d1-4a29-bde3-acb6c49bfdb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052835869s STEP: Saw pod success Jan 24 00:04:40.604: INFO: Pod "pod-secrets-67f14733-95d1-4a29-bde3-acb6c49bfdb5" satisfied condition "success or failure" Jan 24 00:04:40.609: INFO: Trying to get logs from node jerma-node pod pod-secrets-67f14733-95d1-4a29-bde3-acb6c49bfdb5 container secret-volume-test: STEP: delete the pod Jan 24 00:04:41.077: INFO: Waiting for pod pod-secrets-67f14733-95d1-4a29-bde3-acb6c49bfdb5 to disappear Jan 24 00:04:41.082: INFO: Pod pod-secrets-67f14733-95d1-4a29-bde3-acb6c49bfdb5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:04:41.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-379" for this suite. 
• [SLOW TEST:9.448 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1055,"failed":0} [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:04:41.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-6039 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 24 00:04:41.213: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 24 00:05:17.555: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6039 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 24 00:05:17.555: INFO: >>> kubeConfig: /root/.kube/config I0124 00:05:17.618072 8 log.go:172] (0xc002d24e70) (0xc001ed5900) Create stream I0124 00:05:17.618127 8 log.go:172] (0xc002d24e70) (0xc001ed5900) Stream added, broadcasting: 1 I0124 00:05:17.623356 8 log.go:172] (0xc002d24e70) Reply frame received for 1 I0124 00:05:17.623423 8 log.go:172] (0xc002d24e70) (0xc001ed59a0) Create stream I0124 00:05:17.623437 8 log.go:172] (0xc002d24e70) (0xc001ed59a0) Stream added, broadcasting: 3 I0124 00:05:17.627650 8 log.go:172] (0xc002d24e70) Reply frame received for 3 I0124 00:05:17.627714 8 log.go:172] (0xc002d24e70) (0xc001ed5a40) Create stream I0124 00:05:17.627739 8 log.go:172] (0xc002d24e70) (0xc001ed5a40) Stream added, broadcasting: 5 I0124 00:05:17.630774 8 log.go:172] (0xc002d24e70) Reply frame received for 5 I0124 00:05:17.716038 8 log.go:172] (0xc002d24e70) Data frame received for 3 I0124 00:05:17.716130 8 log.go:172] (0xc001ed59a0) (3) Data frame handling I0124 00:05:17.716184 8 log.go:172] (0xc001ed59a0) (3) Data frame sent I0124 00:05:17.805130 8 log.go:172] (0xc002d24e70) (0xc001ed59a0) Stream removed, broadcasting: 3 I0124 00:05:17.805506 8 log.go:172] (0xc002d24e70) Data frame received for 1 I0124 00:05:17.805525 8 log.go:172] (0xc001ed5900) (1) Data frame handling I0124 00:05:17.805561 8 log.go:172] (0xc001ed5900) (1) Data frame sent I0124 00:05:17.805600 8 log.go:172] (0xc002d24e70) (0xc001ed5900) Stream removed, broadcasting: 1 I0124 00:05:17.805734 8 log.go:172] (0xc002d24e70) (0xc001ed5a40) Stream removed, broadcasting: 5 I0124 
00:05:17.805769 8 log.go:172] (0xc002d24e70) (0xc001ed5900) Stream removed, broadcasting: 1 I0124 00:05:17.805783 8 log.go:172] (0xc002d24e70) (0xc001ed59a0) Stream removed, broadcasting: 3 I0124 00:05:17.805796 8 log.go:172] (0xc002d24e70) (0xc001ed5a40) Stream removed, broadcasting: 5 Jan 24 00:05:17.806: INFO: Found all expected endpoints: [netserver-0] I0124 00:05:17.806732 8 log.go:172] (0xc002d24e70) Go away received Jan 24 00:05:17.815: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6039 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 24 00:05:17.815: INFO: >>> kubeConfig: /root/.kube/config I0124 00:05:17.874106 8 log.go:172] (0xc00260fef0) (0xc0011db720) Create stream I0124 00:05:17.874418 8 log.go:172] (0xc00260fef0) (0xc0011db720) Stream added, broadcasting: 1 I0124 00:05:17.879610 8 log.go:172] (0xc00260fef0) Reply frame received for 1 I0124 00:05:17.879685 8 log.go:172] (0xc00260fef0) (0xc001ed5c20) Create stream I0124 00:05:17.879704 8 log.go:172] (0xc00260fef0) (0xc001ed5c20) Stream added, broadcasting: 3 I0124 00:05:17.882251 8 log.go:172] (0xc00260fef0) Reply frame received for 3 I0124 00:05:17.882329 8 log.go:172] (0xc00260fef0) (0xc001aee0a0) Create stream I0124 00:05:17.882350 8 log.go:172] (0xc00260fef0) (0xc001aee0a0) Stream added, broadcasting: 5 I0124 00:05:17.885214 8 log.go:172] (0xc00260fef0) Reply frame received for 5 I0124 00:05:18.017019 8 log.go:172] (0xc00260fef0) Data frame received for 3 I0124 00:05:18.017074 8 log.go:172] (0xc001ed5c20) (3) Data frame handling I0124 00:05:18.017091 8 log.go:172] (0xc001ed5c20) (3) Data frame sent I0124 00:05:18.099058 8 log.go:172] (0xc00260fef0) (0xc001ed5c20) Stream removed, broadcasting: 3 I0124 00:05:18.099222 8 log.go:172] (0xc00260fef0) Data frame received for 1 I0124 00:05:18.099232 8 log.go:172] (0xc0011db720) (1) Data frame handling I0124 00:05:18.099245 8 log.go:172] (0xc0011db720) (1) Data frame sent I0124 00:05:18.099268 8 log.go:172] (0xc00260fef0) (0xc0011db720) Stream removed, broadcasting: 1 I0124 00:05:18.099401 8 log.go:172] (0xc00260fef0) (0xc001aee0a0) Stream removed, broadcasting: 5 I0124 00:05:18.099429 8 log.go:172] (0xc00260fef0) (0xc0011db720) Stream removed, broadcasting: 1 I0124 00:05:18.099438 8 log.go:172] (0xc00260fef0) (0xc001ed5c20) Stream removed, broadcasting: 3 I0124 00:05:18.099447 8 log.go:172] (0xc00260fef0) (0xc001aee0a0) Stream removed, broadcasting: 5 Jan 24 00:05:18.099: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:05:18.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0124 00:05:18.100105 8 log.go:172] (0xc00260fef0) Go away received STEP: Destroying namespace "pod-network-test-6039" for this suite. 
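The endpoint checks above are curl probes run from host-test-container-pod against each netserver pod IP; the same probe can be replayed directly, using the pod name and the 10.44.0.1 address observed in this run:

    kubectl --kubeconfig=/root/.kube/config exec host-test-container-pod \
      --namespace=pod-network-test-6039 -- \
      /bin/sh -c 'curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName'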
• [SLOW TEST:37.013 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1055,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:05:18.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 24 00:05:18.307: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10bf3826-8772-4633-a770-0ca1017c376b" in namespace "downward-api-3297" to be "success or failure" Jan 24 00:05:18.332: INFO: Pod "downwardapi-volume-10bf3826-8772-4633-a770-0ca1017c376b": Phase="Pending", Reason="", readiness=false. Elapsed: 24.80205ms Jan 24 00:05:20.340: INFO: Pod "downwardapi-volume-10bf3826-8772-4633-a770-0ca1017c376b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033130338s Jan 24 00:05:22.345: INFO: Pod "downwardapi-volume-10bf3826-8772-4633-a770-0ca1017c376b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037689416s Jan 24 00:05:24.349: INFO: Pod "downwardapi-volume-10bf3826-8772-4633-a770-0ca1017c376b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042228093s Jan 24 00:05:26.355: INFO: Pod "downwardapi-volume-10bf3826-8772-4633-a770-0ca1017c376b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047886366s Jan 24 00:05:28.369: INFO: Pod "downwardapi-volume-10bf3826-8772-4633-a770-0ca1017c376b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.062263685s Jan 24 00:05:30.376: INFO: Pod "downwardapi-volume-10bf3826-8772-4633-a770-0ca1017c376b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.069356665s Jan 24 00:05:32.381: INFO: Pod "downwardapi-volume-10bf3826-8772-4633-a770-0ca1017c376b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.074292968s Jan 24 00:05:34.392: INFO: Pod "downwardapi-volume-10bf3826-8772-4633-a770-0ca1017c376b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.08528511s STEP: Saw pod success Jan 24 00:05:34.392: INFO: Pod "downwardapi-volume-10bf3826-8772-4633-a770-0ca1017c376b" satisfied condition "success or failure" Jan 24 00:05:34.394: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-10bf3826-8772-4633-a770-0ca1017c376b container client-container: STEP: delete the pod Jan 24 00:05:34.452: INFO: Waiting for pod downwardapi-volume-10bf3826-8772-4633-a770-0ca1017c376b to disappear Jan 24 00:05:34.463: INFO: Pod downwardapi-volume-10bf3826-8772-4633-a770-0ca1017c376b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:05:34.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3297" for this suite. • [SLOW TEST:16.359 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1074,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:05:34.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0124 00:06:15.342720 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jan 24 00:06:15.342: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:06:15.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8088" for this suite. • [SLOW TEST:40.884 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":70,"skipped":1076,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:06:15.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-93a5b22a-595d-4dbc-a5e0-5aa6dab4d07d STEP: Creating a pod to test consume secrets Jan 24 00:06:15.486: INFO: Waiting up to 5m0s for pod "pod-secrets-24527012-3270-4754-8634-8b347b18c794" in namespace "secrets-8035" to be "success or failure" Jan 24 00:06:15.499: INFO: Pod "pod-secrets-24527012-3270-4754-8634-8b347b18c794": Phase="Pending", Reason="", readiness=false. Elapsed: 12.344135ms Jan 24 00:06:17.505: INFO: Pod "pod-secrets-24527012-3270-4754-8634-8b347b18c794": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018359249s Jan 24 00:06:19.510: INFO: Pod "pod-secrets-24527012-3270-4754-8634-8b347b18c794": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023858362s Jan 24 00:06:21.518: INFO: Pod "pod-secrets-24527012-3270-4754-8634-8b347b18c794": Phase="Pending", Reason="", readiness=false. 
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 00:06:15.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-93a5b22a-595d-4dbc-a5e0-5aa6dab4d07d
STEP: Creating a pod to test consume secrets
Jan 24 00:06:15.486: INFO: Waiting up to 5m0s for pod "pod-secrets-24527012-3270-4754-8634-8b347b18c794" in namespace "secrets-8035" to be "success or failure"
Jan 24 00:06:15.499: INFO: Pod "pod-secrets-24527012-3270-4754-8634-8b347b18c794": Phase="Pending", Reason="", readiness=false. Elapsed: 12.344135ms
Jan 24 00:06:17.505: INFO: Pod "pod-secrets-24527012-3270-4754-8634-8b347b18c794": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018359249s
Jan 24 00:06:19.510: INFO: Pod "pod-secrets-24527012-3270-4754-8634-8b347b18c794": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023858362s
Jan 24 00:06:21.518: INFO: Pod "pod-secrets-24527012-3270-4754-8634-8b347b18c794": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031646594s
Jan 24 00:06:23.714: INFO: Pod "pod-secrets-24527012-3270-4754-8634-8b347b18c794": Phase="Pending", Reason="", readiness=false. Elapsed: 8.227441423s
Jan 24 00:06:26.026: INFO: Pod "pod-secrets-24527012-3270-4754-8634-8b347b18c794": Phase="Pending", Reason="", readiness=false. Elapsed: 10.53926976s
Jan 24 00:06:28.500: INFO: Pod "pod-secrets-24527012-3270-4754-8634-8b347b18c794": Phase="Pending", Reason="", readiness=false. Elapsed: 13.013629147s
Jan 24 00:06:31.380: INFO: Pod "pod-secrets-24527012-3270-4754-8634-8b347b18c794": Phase="Pending", Reason="", readiness=false. Elapsed: 15.893325116s
Jan 24 00:06:33.549: INFO: Pod "pod-secrets-24527012-3270-4754-8634-8b347b18c794": Phase="Pending", Reason="", readiness=false. Elapsed: 18.062936205s
Jan 24 00:06:36.085: INFO: Pod "pod-secrets-24527012-3270-4754-8634-8b347b18c794": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.598244193s
STEP: Saw pod success
Jan 24 00:06:36.085: INFO: Pod "pod-secrets-24527012-3270-4754-8634-8b347b18c794" satisfied condition "success or failure"
Jan 24 00:06:36.089: INFO: Trying to get logs from node jerma-node pod pod-secrets-24527012-3270-4754-8634-8b347b18c794 container secret-env-test:
STEP: delete the pod
Jan 24 00:06:36.390: INFO: Waiting for pod pod-secrets-24527012-3270-4754-8634-8b347b18c794 to disappear
Jan 24 00:06:36.427: INFO: Pod pod-secrets-24527012-3270-4754-8634-8b347b18c794 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 00:06:36.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8035" for this suite.
• [SLOW TEST:21.094 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1097,"failed":0}
SSSSSSSSSSS
------------------------------
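The secrets test above consumes a Secret through env[].valueFrom.secretKeyRef, so the kubelet injects the decoded value into the container's environment at start. A sketch of the relevant spec fragments in Go; the key and variable names are illustrative, not the test's actual values:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}

	// The container sees SECRET_DATA=value-1 in its environment; the e2e
	// pod's secret-env-test container prints its environment and exits,
	// which is why the log waits for "success or failure" above.
	container := corev1.Container{
		Name:    "secret-env-test",
		Image:   "busybox", // illustrative
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{{
			Name: "SECRET_DATA",
			ValueFrom: &corev1.EnvVarSource{
				SecretKeyRef: &corev1.SecretKeySelector{
					LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
					Key:                  "data-1",
				},
			},
		}},
	}
	fmt.Printf("%s <- secret %q key %q\n",
		container.Env[0].Name,
		container.Env[0].ValueFrom.SecretKeyRef.Name,
		container.Env[0].ValueFrom.SecretKeyRef.Key)
}
```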
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 00:06:36.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 00:06:48.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7814" for this suite.
• [SLOW TEST:11.566 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":72,"skipped":1108,"failed":0}
SSSSSSSSSSS
------------------------------
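The ResourceQuota test above watches status.used for the replicationcontrollers count rise when the RC is created and drop back when it is deleted. A sketch of a quota object of the sort the test creates; the hard limits here are illustrative, not the test's actual values:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				// status.used["replicationcontrollers"] should go 0 -> 1 when
				// the RC is created and back to 0 once it is deleted, which is
				// what the "captures ... creation" and "released usage" steps
				// above assert.
				corev1.ResourceReplicationControllers: resource.MustParse("1"),
				corev1.ResourcePods:                   resource.MustParse("10"),
			},
		},
	}
	fmt.Printf("%+v\n", quota.Spec.Hard)
}
```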
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 00:06:48.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Jan 24 00:06:48.236: INFO: Waiting up to 5m0s for pod "downward-api-2ee53545-68fe-4d91-93d7-64e3f0745562" in namespace "downward-api-3686" to be "success or failure"
Jan 24 00:06:48.249: INFO: Pod "downward-api-2ee53545-68fe-4d91-93d7-64e3f0745562": Phase="Pending", Reason="", readiness=false. Elapsed: 13.078496ms
Jan 24 00:06:50.259: INFO: Pod "downward-api-2ee53545-68fe-4d91-93d7-64e3f0745562": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023277255s
Jan 24 00:06:52.270: INFO: Pod "downward-api-2ee53545-68fe-4d91-93d7-64e3f0745562": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034087608s
Jan 24 00:06:54.277: INFO: Pod "downward-api-2ee53545-68fe-4d91-93d7-64e3f0745562": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040976229s
Jan 24 00:06:56.285: INFO: Pod "downward-api-2ee53545-68fe-4d91-93d7-64e3f0745562": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048953551s
STEP: Saw pod success
Jan 24 00:06:56.285: INFO: Pod "downward-api-2ee53545-68fe-4d91-93d7-64e3f0745562" satisfied condition "success or failure"
Jan 24 00:06:56.290: INFO: Trying to get logs from node jerma-node pod downward-api-2ee53545-68fe-4d91-93d7-64e3f0745562 container dapi-container:
STEP: delete the pod
Jan 24 00:06:56.634: INFO: Waiting for pod downward-api-2ee53545-68fe-4d91-93d7-64e3f0745562 to disappear
Jan 24 00:06:56.645: INFO: Pod downward-api-2ee53545-68fe-4d91-93d7-64e3f0745562 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 00:06:56.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3686" for this suite.
• [SLOW TEST:8.640 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1119,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
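The downward-API test above maps pod metadata into environment variables via fieldRef; metadata.uid is resolved by the kubelet when the container starts and stays fixed for the pod's lifetime. A sketch of the env wiring in Go (variable names are illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// fieldRef pulls values from the pod's own object; besides metadata.uid,
	// paths such as metadata.name, metadata.namespace and status.podIP are
	// also commonly projected this way.
	env := []corev1.EnvVar{
		{
			Name: "POD_UID",
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
			},
		},
		{
			Name: "POD_NAME",
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
			},
		},
	}
	fmt.Printf("%+v\n", env)
}
```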
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 00:06:56.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name secret-emptykey-test-1839df89-2001-43a0-9e80-7a2dc025673f
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 00:06:56.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4173" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":74,"skipped":1156,"failed":0}
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 00:06:56.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 24 00:07:13.138: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 00:07:13.146: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 00:07:15.146: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 00:07:15.151: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 00:07:17.146: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 00:07:17.153: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 00:07:19.146: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 00:07:19.174: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 00:07:19.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6749" for this suite.
• [SLOW TEST:22.361 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1156,"failed":0}
SSSSSSS
------------------------------
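The lifecycle-hook test above gives a container a preStop exec hook and then deletes the pod: the kubelet runs the hook before sending SIGTERM, and the helper pod created in the BeforeEach (the "container to handle the HTTPGet hook request") records that the hook fired, which is what "check prestop hook" verifies. A sketch of the hook wiring; the target URL is hypothetical, and on the v1.17-era API this log comes from the handler type was corev1.Handler rather than today's corev1.LifecycleHandler:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	grace := int64(30)
	pod := &corev1.Pod{
		Spec: corev1.PodSpec{
			// The hook must finish within the grace period, or it is killed
			// along with the container.
			TerminationGracePeriodSeconds: &grace,
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-exec-hook",
				Image: "busybox", // illustrative
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							// Runs inside the container before SIGTERM; here it
							// pings a hypothetical handler service to record the
							// event, roughly what the e2e helper pod does.
							Command: []string{"sh", "-c", "wget -qO- http://handler-svc:8080/echo?msg=prestop"},
						},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Lifecycle.PreStop.Exec.Command)
}
```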
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 00:07:19.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:73
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 24 00:07:19.332: INFO: Creating deployment "webserver-deployment"
Jan 24 00:07:19.340: INFO: Waiting for observed generation 1
Jan 24 00:07:21.600: INFO: Waiting for all required pods to come up
Jan 24 00:07:21.612: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 24 00:07:48.166: INFO: Waiting for deployment "webserver-deployment" to complete
Jan 24 00:07:48.186: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jan 24 00:07:48.195: INFO: Updating deployment webserver-deployment
Jan 24 00:07:48.195: INFO: Waiting for observed generation 2
Jan 24 00:07:51.450: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 24 00:07:51.467: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 24 00:07:51.654: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 24 00:07:53.006: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 24 00:07:53.006: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 24 00:07:53.011: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 24 00:07:53.031: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jan 24 00:07:53.031: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jan 24 00:07:53.051: INFO: Updating deployment webserver-deployment
Jan 24 00:07:53.052: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jan 24 00:07:54.506: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 24 00:07:58.667: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:67
Jan 24 00:08:00.001: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-3969 /apis/apps/v1/namespaces/deployment-3969/deployments/webserver-deployment 15ff5811-d752-4769-a085-c3fbd4625416 3909781 3 2020-01-24 00:07:19 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0035780a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-24 00:07:54 +0000 UTC,LastTransitionTime:2020-01-24 00:07:54 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-01-24 00:07:58 +0000 UTC,LastTransitionTime:2020-01-24 00:07:19 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}
Jan 24 00:08:03.281: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-3969
/apis/apps/v1/namespaces/deployment-3969/replicasets/webserver-deployment-c7997dcc8 5311b13c-32e8-4b5e-9ea8-b0c98d5cf168 3909771 3 2020-01-24 00:07:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 15ff5811-d752-4769-a085-c3fbd4625416 0xc0031ce297 0xc0031ce298}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031ce308 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 24 00:08:03.281: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jan 24 00:08:03.282: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-3969 /apis/apps/v1/namespaces/deployment-3969/replicasets/webserver-deployment-595b5b9587 93c3fd44-37e9-40da-a4bb-4f3a7d24040b 3909770 3 2020-01-24 00:07:19 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 15ff5811-d752-4769-a085-c3fbd4625416 0xc0031ce1d7 0xc0031ce1d8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031ce238 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jan 24 00:08:05.737: INFO: Pod "webserver-deployment-595b5b9587-2226f" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2226f webserver-deployment-595b5b9587- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-595b5b9587-2226f 
069e303d-b341-420f-9b50-a1d87e062f27 3909761 0 2020-01-24 00:07:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 93c3fd44-37e9-40da-a4bb-4f3a7d24040b 0xc0035784c7 0xc0035784c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 00:08:05.737: INFO: Pod "webserver-deployment-595b5b9587-2jbqt" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2jbqt 
webserver-deployment-595b5b9587- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-595b5b9587-2jbqt ef6e82b2-c7b2-4440-b5d0-a7c35e13ca5d 3909743 0 2020-01-24 00:07:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 93c3fd44-37e9-40da-a4bb-4f3a7d24040b 0xc0035785d7 0xc0035785d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 00:08:05.737: INFO: Pod 
"webserver-deployment-595b5b9587-4cjjf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4cjjf webserver-deployment-595b5b9587- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-595b5b9587-4cjjf 74dd81e7-bb3b-4af2-88a7-4e50e4242a89 3909763 0 2020-01-24 00:07:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 93c3fd44-37e9-40da-a4bb-4f3a7d24040b 0xc0035786f7 0xc0035786f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:57 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 00:08:05.738: INFO: Pod "webserver-deployment-595b5b9587-4dvd9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4dvd9 webserver-deployment-595b5b9587- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-595b5b9587-4dvd9 1e611bc2-bc81-44bc-bbc9-d5bc27e6cab1 3909748 0 2020-01-24 00:07:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 93c3fd44-37e9-40da-a4bb-4f3a7d24040b 0xc003578817 0xc003578818}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:Tru
e,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 00:08:05.738: INFO: Pod "webserver-deployment-595b5b9587-5v9ft" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5v9ft webserver-deployment-595b5b9587- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-595b5b9587-5v9ft ff33b776-e8b9-40ba-8b0e-73c9a77911bd 3909630 0 2020-01-24 00:07:19 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 93c3fd44-37e9-40da-a4bb-4f3a7d24040b 0xc003578937 0xc003578938}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{
},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.7,StartTime:2020-01-24 00:07:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-24 00:07:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://27332c13381a56e7bd6b5ee6406004e316d6f88bfb14ea45deee858e651338f8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 00:08:05.739: INFO: Pod "webserver-deployment-595b5b9587-676t4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-676t4 webserver-deployment-595b5b9587- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-595b5b9587-676t4 174f9a50-c619-4545-b981-ce9a112f28a9 3909750 0 2020-01-24 00:07:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 93c3fd44-37e9-40da-a4bb-4f3a7d24040b 0xc003578aa0 0xc003578aa1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 00:08:05.739: INFO: Pod "webserver-deployment-595b5b9587-8r6tq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8r6tq webserver-deployment-595b5b9587- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-595b5b9587-8r6tq f13680ab-f0b5-4937-a5fb-55deaf38cac8 3909766 0 2020-01-24 00:07:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 
ReplicaSet webserver-deployment-595b5b9587 93c3fd44-37e9-40da-a4bb-4f3a7d24040b 0xc003578ba7 0xc003578ba8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 00:08:05.739: INFO: Pod "webserver-deployment-595b5b9587-98r89" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-98r89 webserver-deployment-595b5b9587- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-595b5b9587-98r89 
093b68dd-8eea-46dd-940e-86579b3f5689 3909627 0 2020-01-24 00:07:19 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 93c3fd44-37e9-40da-a4bb-4f3a7d24040b 0xc003578cb7 0xc003578cb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-01-24 00:07:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.8,StartTime:2020-01-24 00:07:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-24 00:07:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://58c9d06324d0c140e18fdc4ae190c58aacb8c20b87a6a3aa1b55851e3ec0d05b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 00:08:05.740: INFO: Pod "webserver-deployment-595b5b9587-clbv2" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-clbv2 webserver-deployment-595b5b9587- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-595b5b9587-clbv2 946cd699-95da-4bac-b0bc-6853910a7559 3909613 0 2020-01-24 00:07:19 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 93c3fd44-37e9-40da-a4bb-4f3a7d24040b 0xc003578e20 0xc003578e21}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,
Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.3,StartTime:2020-01-24 00:07:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-24 00:07:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://bfb6d2557624e00d254180a16688e4c9db316025d8f79d6b374aab4526d483c2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 00:08:05.740: INFO: Pod "webserver-deployment-595b5b9587-h4spp" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-h4spp webserver-deployment-595b5b9587- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-595b5b9587-h4spp dbf0c86c-0687-4a88-b850-670cc30ccdcf 3909587 0 2020-01-24 00:07:19 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 93c3fd44-37e9-40da-a4bb-4f3a7d24040b 0xc003578f90 0xc003578f91}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-01-24 00:07:19 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-24 00:07:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://50dedf90aa3dbed936054b38d2cf83bf9e4ed66f3f0e825e61493cf4b32e624e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 24 00:08:05.740: INFO: Pod "webserver-deployment-595b5b9587-h9gjp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-h9gjp webserver-deployment-595b5b9587- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-595b5b9587-h9gjp a13d8a32-d35f-470d-b619-6c7999d1a1f8 3909746 0 2020-01-24 00:07:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 93c3fd44-37e9-40da-a4bb-4f3a7d24040b 0xc0035790f0 0xc0035790f1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 24 00:08:05.741: INFO: Pod "webserver-deployment-595b5b9587-hjr8j" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hjr8j webserver-deployment-595b5b9587- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-595b5b9587-hjr8j 10961758-91cc-4503-b2d7-dc70d1a341df 3909616 0 2020-01-24 00:07:19 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 93c3fd44-37e9-40da-a4bb-4f3a7d24040b 0xc003579207 0xc003579208}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.5,StartTime:2020-01-24 00:07:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-24 00:07:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://5a206f611061369eb489f21c2e822c2c0dfc4f96519ffa59c4ee99675621f0e5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 24 00:08:05.741: INFO: Pod "webserver-deployment-595b5b9587-mc4l6" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mc4l6 webserver-deployment-595b5b9587- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-595b5b9587-mc4l6 cabd6808-5c40-47ca-ad90-b6712e076c73 3909594 0 2020-01-24 00:07:19 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 93c3fd44-37e9-40da-a4bb-4f3a7d24040b 0xc0035793a0 0xc0035793a1}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-24 00:07:19 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-24 00:07:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://eeab5a75372620d0b5f9e301c52fada57cfda6358d75ee93c26b883e99b88993,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 24 00:08:05.741: INFO: Pod "webserver-deployment-595b5b9587-mrspm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mrspm webserver-deployment-595b5b9587- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-595b5b9587-mrspm fa3d323a-c9ac-4162-8764-296a86f2cbb3 3909762 0 2020-01-24 00:07:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 93c3fd44-37e9-40da-a4bb-4f3a7d24040b 0xc003579520 0xc003579521}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 24 00:08:05.742: INFO: Pod "webserver-deployment-595b5b9587-nq99g" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nq99g webserver-deployment-595b5b9587- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-595b5b9587-nq99g 498a6a50-8b59-463d-a4d7-b64397075098 3909785 0 2020-01-24 00:07:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 93c3fd44-37e9-40da-a4bb-4f3a7d24040b 0xc003579627 0xc003579628}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-24 00:07:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 24 00:08:05.742: INFO: Pod "webserver-deployment-595b5b9587-vlmnw" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vlmnw webserver-deployment-595b5b9587- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-595b5b9587-vlmnw a1779dfd-009e-41b7-ba26-5b902616ad95 3909794 0 2020-01-24 00:07:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 93c3fd44-37e9-40da-a4bb-4f3a7d24040b 0xc003579777 0xc003579778}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-24 
00:07:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 24 00:08:05.743: INFO: Pod "webserver-deployment-595b5b9587-w5cb5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-w5cb5 webserver-deployment-595b5b9587- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-595b5b9587-w5cb5 e6b5714a-db54-401f-bfd1-e2d93158bc5a 3909745 0 2020-01-24 00:07:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 93c3fd44-37e9-40da-a4bb-4f3a7d24040b 0xc0035798c7 0xc0035798c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 24 00:08:05.743: INFO: Pod "webserver-deployment-595b5b9587-w7lq8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-w7lq8 webserver-deployment-595b5b9587- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-595b5b9587-w7lq8 d7155278-1bbf-4f7b-9a26-b868ce1752a7 3909782 0 2020-01-24 00:07:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 93c3fd44-37e9-40da-a4bb-4f3a7d24040b 0xc0035799e7 0xc0035799e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-24 00:07:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 24 00:08:05.743: INFO: Pod "webserver-deployment-595b5b9587-xtdqd" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xtdqd webserver-deployment-595b5b9587- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-595b5b9587-xtdqd 5b698a1a-df5b-4d0a-b288-33d8ee229f0a 3909607 0 2020-01-24 00:07:19 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 93c3fd44-37e9-40da-a4bb-4f3a7d24040b 0xc003579b47 0xc003579b48}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.4,StartTime:2020-01-24 00:07:19 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-24 00:07:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://2d8757c361c4344d765181cc448b4df37a973e4e9977687b3cbcbd0d1107a5c2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 24 00:08:05.744: INFO: Pod "webserver-deployment-595b5b9587-ztt2s" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ztt2s webserver-deployment-595b5b9587- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-595b5b9587-ztt2s 64e8163a-34f0-4b43-8998-cccf49dfd63f 3909601 0 2020-01-24 00:07:19 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 93c3fd44-37e9-40da-a4bb-4f3a7d24040b 0xc003579cc0 0xc003579cc1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-24 00:07:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-24 00:07:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://1b9268e4116bbeeb351076752dd1e6827667ae30786f84d680da07858b752555,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 24 00:08:05.744: INFO: Pod "webserver-deployment-c7997dcc8-4xnb2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4xnb2 webserver-deployment-c7997dcc8- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-c7997dcc8-4xnb2 6d17f09f-e285-49d0-b8f5-f4680c1c5194 3909742 0 2020-01-24 00:07:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5311b13c-32e8-4b5e-9ea8-b0c98d5cf168 0xc003579e30 0xc003579e31}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 00:08:05.744: INFO: Pod "webserver-deployment-c7997dcc8-9l9fk" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9l9fk webserver-deployment-c7997dcc8- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-c7997dcc8-9l9fk 14979169-6a93-43f1-b0d2-856be1ae0a19 3909744 0 2020-01-24 00:07:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet 
webserver-deployment-c7997dcc8 5311b13c-32e8-4b5e-9ea8-b0c98d5cf168 0xc003579f47 0xc003579f48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 00:08:05.744: INFO: Pod "webserver-deployment-c7997dcc8-cx96p" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cx96p webserver-deployment-c7997dcc8- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-c7997dcc8-cx96p a546ff5a-2c6c-439e-9d1b-7dca61e2b689 3909764 0 2020-01-24 00:07:56 +0000 UTC 
map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5311b13c-32e8-4b5e-9ea8-b0c98d5cf168 0xc003772077 0xc003772078}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 00:08:05.745: INFO: Pod "webserver-deployment-c7997dcc8-d95jd" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-d95jd webserver-deployment-c7997dcc8- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-c7997dcc8-d95jd 
b7e6d9ee-72e2-43b9-bd60-79fe892fa410 3909728 0 2020-01-24 00:07:54 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5311b13c-32e8-4b5e-9ea8-b0c98d5cf168 0xc003772197 0xc003772198}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 00:08:05.745: INFO: Pod "webserver-deployment-c7997dcc8-dhqcf" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dhqcf webserver-deployment-c7997dcc8- deployment-3969 
/api/v1/namespaces/deployment-3969/pods/webserver-deployment-c7997dcc8-dhqcf 02239dd8-23b9-4b5b-82ab-595bec4ac6ee 3909747 0 2020-01-24 00:07:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5311b13c-32e8-4b5e-9ea8-b0c98d5cf168 0xc0037722b7 0xc0037722b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 00:08:05.745: INFO: Pod "webserver-deployment-c7997dcc8-fqlwx" is not available: 
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fqlwx webserver-deployment-c7997dcc8- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-c7997dcc8-fqlwx 26d5663d-250b-4010-9be1-b6a06c3f4688 3909669 0 2020-01-24 00:07:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5311b13c-32e8-4b5e-9ea8-b0c98d5cf168 0xc0037723e7 0xc0037723e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-24 00:07:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 00:08:05.746: INFO: Pod "webserver-deployment-c7997dcc8-fvvgx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fvvgx webserver-deployment-c7997dcc8- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-c7997dcc8-fvvgx d6999927-2338-49e1-8a24-5730ad5601d7 3909791 0 2020-01-24 00:07:54 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5311b13c-32e8-4b5e-9ea8-b0c98d5cf168 0xc003772557 0xc003772558}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToke
n:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-24 00:07:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 00:08:05.746: INFO: Pod "webserver-deployment-c7997dcc8-gqb4c" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gqb4c webserver-deployment-c7997dcc8- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-c7997dcc8-gqb4c 4a2a3e26-05c6-475e-8961-5b752c7df6a3 3909769 0 2020-01-24 00:07:54 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5311b13c-32e8-4b5e-9ea8-b0c98d5cf168 0xc0037726d7 0xc0037726d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-24 00:07:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 00:08:05.747: INFO: Pod "webserver-deployment-c7997dcc8-khj2v" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-khj2v webserver-deployment-c7997dcc8- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-c7997dcc8-khj2v 236c4cbd-e542-4250-bf57-dc4e21aab2f1 3909687 0 2020-01-24 00:07:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5311b13c-32e8-4b5e-9ea8-b0c98d5cf168 0xc003772847 0xc003772848}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolic
y:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-24 00:07:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 00:08:05.747: INFO: Pod "webserver-deployment-c7997dcc8-lj4ss" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lj4ss webserver-deployment-c7997dcc8- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-c7997dcc8-lj4ss cea33c0d-deaf-456d-8f5f-06d4ed9a575a 3909765 0 2020-01-24 00:07:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5311b13c-32e8-4b5e-9ea8-b0c98d5cf168 0xc0037729b7 0xc0037729b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.6,StartTime:2020-01-24 00:07:48 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: pull access denied for webserver, repository does not exist or may require 'docker login',},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 00:08:05.747: INFO: Pod "webserver-deployment-c7997dcc8-ln5ts" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ln5ts webserver-deployment-c7997dcc8- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-c7997dcc8-ln5ts 607fdaaf-3c77-4a06-91e6-1286b07bbddb 3909684 0 2020-01-24 00:07:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5311b13c-32e8-4b5e-9ea8-b0c98d5cf168 0xc003772b60 0xc003772b61}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName
:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-24 00:07:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 00:08:05.748: INFO: Pod "webserver-deployment-c7997dcc8-rlwgw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rlwgw webserver-deployment-c7997dcc8- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-c7997dcc8-rlwgw 00fe313b-900d-4848-b7a7-47b64f6300f1 3909673 0 2020-01-24 00:07:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5311b13c-32e8-4b5e-9ea8-b0c98d5cf168 0xc003772cd7 0xc003772cd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-24 00:07:48 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 00:08:05.748: INFO: Pod "webserver-deployment-c7997dcc8-ztjjf" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ztjjf webserver-deployment-c7997dcc8- deployment-3969 /api/v1/namespaces/deployment-3969/pods/webserver-deployment-c7997dcc8-ztjjf fc964c51-00d4-49f4-b242-65c9a9f6a89d 3909732 0 2020-01-24 00:07:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5311b13c-32e8-4b5e-9ea8-b0c98d5cf168 0xc003772e57 0xc003772e58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2nf2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2nf2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2nf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:
ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:07:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:08:05.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3969" for this suite. • [SLOW TEST:50.527 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":76,"skipped":1163,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:08:09.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:08:47.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3983" for this suite. • [SLOW TEST:43.158 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":77,"skipped":1197,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:08:52.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1734 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 24 00:08:55.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-509' Jan 24 00:08:57.006: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 24 00:08:57.006: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1739 Jan 24 00:08:59.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-509' Jan 24 00:08:59.783: INFO: stderr: "" Jan 24 00:08:59.784: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:08:59.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-509" for this suite. 
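The stderr above flags --generator=deployment/apps.v1 as deprecated. For comparison, a minimal client-go sketch that creates an equivalent Deployment directly; the "run" label key, the error handling, and the context-taking Create signature of newer client-go releases are assumptions for illustration, not details taken from the suite:

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite points at.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// "run" is the label key the old generator applied; illustrative only.
	labels := map[string]string{"run": "e2e-test-httpd-deployment"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-deployment"},
		Spec: appsv1.DeploymentSpec{
			// Replicas is left nil, so the apiserver defaults it to 1,
			// matching the single pod the test verifies.
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-test-httpd-deployment",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	if _, err := client.AppsV1().Deployments("kubectl-509").Create(
		context.TODO(), d, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
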
• [SLOW TEST:7.142 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1730 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":78,"skipped":1210,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:09:00.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 24 00:09:00.440: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc6d04f2-2836-4f96-bb47-44306b428942" in namespace "projected-6983" to be "success or failure" Jan 24 00:09:00.460: INFO: Pod "downwardapi-volume-dc6d04f2-2836-4f96-bb47-44306b428942": Phase="Pending", Reason="", readiness=false. Elapsed: 19.782905ms Jan 24 00:09:02.647: INFO: Pod "downwardapi-volume-dc6d04f2-2836-4f96-bb47-44306b428942": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207024896s Jan 24 00:09:05.225: INFO: Pod "downwardapi-volume-dc6d04f2-2836-4f96-bb47-44306b428942": Phase="Pending", Reason="", readiness=false. Elapsed: 4.785426298s Jan 24 00:09:07.892: INFO: Pod "downwardapi-volume-dc6d04f2-2836-4f96-bb47-44306b428942": Phase="Pending", Reason="", readiness=false. Elapsed: 7.451920064s Jan 24 00:09:10.401: INFO: Pod "downwardapi-volume-dc6d04f2-2836-4f96-bb47-44306b428942": Phase="Pending", Reason="", readiness=false. Elapsed: 9.961230568s Jan 24 00:09:13.013: INFO: Pod "downwardapi-volume-dc6d04f2-2836-4f96-bb47-44306b428942": Phase="Pending", Reason="", readiness=false. Elapsed: 12.573587323s Jan 24 00:09:16.105: INFO: Pod "downwardapi-volume-dc6d04f2-2836-4f96-bb47-44306b428942": Phase="Pending", Reason="", readiness=false. Elapsed: 15.665497433s Jan 24 00:09:18.475: INFO: Pod "downwardapi-volume-dc6d04f2-2836-4f96-bb47-44306b428942": Phase="Pending", Reason="", readiness=false. Elapsed: 18.034878094s Jan 24 00:09:20.732: INFO: Pod "downwardapi-volume-dc6d04f2-2836-4f96-bb47-44306b428942": Phase="Pending", Reason="", readiness=false. Elapsed: 20.292321569s Jan 24 00:09:23.986: INFO: Pod "downwardapi-volume-dc6d04f2-2836-4f96-bb47-44306b428942": Phase="Pending", Reason="", readiness=false. 
Elapsed: 23.546015452s Jan 24 00:09:25.990: INFO: Pod "downwardapi-volume-dc6d04f2-2836-4f96-bb47-44306b428942": Phase="Pending", Reason="", readiness=false. Elapsed: 25.549863975s Jan 24 00:09:28.277: INFO: Pod "downwardapi-volume-dc6d04f2-2836-4f96-bb47-44306b428942": Phase="Pending", Reason="", readiness=false. Elapsed: 27.837663189s Jan 24 00:09:31.223: INFO: Pod "downwardapi-volume-dc6d04f2-2836-4f96-bb47-44306b428942": Phase="Pending", Reason="", readiness=false. Elapsed: 30.783105529s Jan 24 00:09:33.786: INFO: Pod "downwardapi-volume-dc6d04f2-2836-4f96-bb47-44306b428942": Phase="Pending", Reason="", readiness=false. Elapsed: 33.345706004s Jan 24 00:09:35.796: INFO: Pod "downwardapi-volume-dc6d04f2-2836-4f96-bb47-44306b428942": Phase="Pending", Reason="", readiness=false. Elapsed: 35.356190827s Jan 24 00:09:37.808: INFO: Pod "downwardapi-volume-dc6d04f2-2836-4f96-bb47-44306b428942": Phase="Pending", Reason="", readiness=false. Elapsed: 37.36796935s Jan 24 00:09:39.813: INFO: Pod "downwardapi-volume-dc6d04f2-2836-4f96-bb47-44306b428942": Phase="Pending", Reason="", readiness=false. Elapsed: 39.373504722s Jan 24 00:09:41.833: INFO: Pod "downwardapi-volume-dc6d04f2-2836-4f96-bb47-44306b428942": Phase="Succeeded", Reason="", readiness=false. Elapsed: 41.393188668s STEP: Saw pod success Jan 24 00:09:41.833: INFO: Pod "downwardapi-volume-dc6d04f2-2836-4f96-bb47-44306b428942" satisfied condition "success or failure" Jan 24 00:09:41.846: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-dc6d04f2-2836-4f96-bb47-44306b428942 container client-container: STEP: delete the pod Jan 24 00:09:42.032: INFO: Waiting for pod downwardapi-volume-dc6d04f2-2836-4f96-bb47-44306b428942 to disappear Jan 24 00:09:42.037: INFO: Pod downwardapi-volume-dc6d04f2-2836-4f96-bb47-44306b428942 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:09:42.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6983" for this suite. 
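The pod this test builds can be sketched with the Kubernetes API types as below. The names and image are illustrative, since the suite's exact helper is not shown here; the essential point is that the container declares no resources.limits.cpu, so the file projected from limits.cpu reports the node's allocatable CPU, which is what the test asserts on:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// examplePod declares no CPU limit, so the downward-API file projected from
// limits.cpu falls back to the node's allocatable CPU.
func examplePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29", // illustrative image
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(examplePod().Name)
}
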
• [SLOW TEST:42.009 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1213,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:09:42.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 24 00:09:42.309: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e5fc9665-4666-44f2-a135-f8437c8de628" in namespace "projected-500" to be "success or failure" Jan 24 00:09:42.333: INFO: Pod "downwardapi-volume-e5fc9665-4666-44f2-a135-f8437c8de628": Phase="Pending", Reason="", readiness=false. Elapsed: 23.789219ms Jan 24 00:09:44.340: INFO: Pod "downwardapi-volume-e5fc9665-4666-44f2-a135-f8437c8de628": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03047731s Jan 24 00:09:46.348: INFO: Pod "downwardapi-volume-e5fc9665-4666-44f2-a135-f8437c8de628": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038504612s Jan 24 00:09:48.356: INFO: Pod "downwardapi-volume-e5fc9665-4666-44f2-a135-f8437c8de628": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046640613s Jan 24 00:09:50.363: INFO: Pod "downwardapi-volume-e5fc9665-4666-44f2-a135-f8437c8de628": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053770986s Jan 24 00:09:52.367: INFO: Pod "downwardapi-volume-e5fc9665-4666-44f2-a135-f8437c8de628": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.058145664s STEP: Saw pod success Jan 24 00:09:52.367: INFO: Pod "downwardapi-volume-e5fc9665-4666-44f2-a135-f8437c8de628" satisfied condition "success or failure" Jan 24 00:09:52.370: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-e5fc9665-4666-44f2-a135-f8437c8de628 container client-container: STEP: delete the pod Jan 24 00:09:52.422: INFO: Waiting for pod downwardapi-volume-e5fc9665-4666-44f2-a135-f8437c8de628 to disappear Jan 24 00:09:52.481: INFO: Pod downwardapi-volume-e5fc9665-4666-44f2-a135-f8437c8de628 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:09:52.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-500" for this suite. • [SLOW TEST:10.438 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1217,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:09:52.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Jan 24 00:09:52.765: INFO: Waiting up to 5m0s for pod "downward-api-d615e48b-a7b1-4d28-883b-21b92268c3b6" in namespace "downward-api-3518" to be "success or failure" Jan 24 00:09:52.915: INFO: Pod "downward-api-d615e48b-a7b1-4d28-883b-21b92268c3b6": Phase="Pending", Reason="", readiness=false. Elapsed: 150.409268ms Jan 24 00:09:54.922: INFO: Pod "downward-api-d615e48b-a7b1-4d28-883b-21b92268c3b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156733885s Jan 24 00:09:56.936: INFO: Pod "downward-api-d615e48b-a7b1-4d28-883b-21b92268c3b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171527347s Jan 24 00:09:58.943: INFO: Pod "downward-api-d615e48b-a7b1-4d28-883b-21b92268c3b6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.17843576s Jan 24 00:10:00.947: INFO: Pod "downward-api-d615e48b-a7b1-4d28-883b-21b92268c3b6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.18260999s Jan 24 00:10:02.955: INFO: Pod "downward-api-d615e48b-a7b1-4d28-883b-21b92268c3b6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.190420591s STEP: Saw pod success Jan 24 00:10:02.955: INFO: Pod "downward-api-d615e48b-a7b1-4d28-883b-21b92268c3b6" satisfied condition "success or failure" Jan 24 00:10:02.962: INFO: Trying to get logs from node jerma-node pod downward-api-d615e48b-a7b1-4d28-883b-21b92268c3b6 container dapi-container: STEP: delete the pod Jan 24 00:10:03.004: INFO: Waiting for pod downward-api-d615e48b-a7b1-4d28-883b-21b92268c3b6 to disappear Jan 24 00:10:03.009: INFO: Pod downward-api-d615e48b-a7b1-4d28-883b-21b92268c3b6 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:10:03.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3518" for this suite. • [SLOW TEST:10.553 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1263,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:10:03.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 24 00:10:03.849: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 24 00:10:05.868: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421403, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421403, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421403, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421403, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:10:07.874: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421403, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421403, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421403, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421403, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:10:09.874: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421403, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421403, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421403, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421403, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:10:11.874: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421403, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421403, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421403, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421403, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 24 00:10:14.911: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Jan 24 00:10:22.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-2844 to-be-attached-pod -i -c=container1' Jan 24 00:10:23.172: INFO: rc: 1 
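For context on the denial above: `kubectl attach` exits with rc: 1 because a validating admission webhook registered for the pods/attach subresource rejects the request. A minimal sketch of such a handler in Go, assuming k8s.io/api/admission/v1 is available; the path, port, and denial message are illustrative (not those of the e2e sample webhook), and TLS setup is omitted for brevity.

```go
// deny_attach.go: a minimal validating-webhook handler that rejects
// every AdmissionReview it receives (e.g. for pods/attach).
package main

import (
	"encoding/json"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func denyAttach(w http.ResponseWriter, r *http.Request) {
	var review admissionv1.AdmissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
		http.Error(w, "malformed AdmissionReview", http.StatusBadRequest)
		return
	}
	// Echo the request UID back and deny the operation; the apiserver
	// surfaces Result.Message to the kubectl user, which is why the
	// test sees a non-zero exit code (rc: 1) from `kubectl attach`.
	review.Response = &admissionv1.AdmissionResponse{
		UID:     review.Request.UID,
		Allowed: false,
		Result:  &metav1.Status{Message: "attaching to pods is denied by the webhook"},
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(review)
}

func main() {
	http.HandleFunc("/always-deny", denyAttach)
	// A real webhook must serve HTTPS with a certificate the apiserver
	// trusts; plain HTTP here keeps the sketch short.
	http.ListenAndServe(":8443", nil)
}
```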
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:10:23.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2844" for this suite. STEP: Destroying namespace "webhook-2844-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:20.256 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":82,"skipped":1269,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:10:23.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Jan 24 00:10:35.968: INFO: Successfully updated pod "labelsupdate15671370-67ba-4ac0-a614-1166734dc5e3" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:10:40.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3176" for this suite. 
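The labels-update test above patches the pod's labels and then waits for the kubelet to rewrite the projected downward API file that exposes metadata.labels. A minimal client-go sketch of the patch step, assuming client-go v0.18+ (context-taking methods); the pod name is a placeholder for the generated labelsupdate... name.

```go
// Patch one label on a pod; a projected downward API volume exposing
// metadata.labels will eventually reflect the new value.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the e2e run uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Merge-patch one label; the kubelet rewrites the projected
	// metadata.labels file on its next sync of the pod.
	patch := []byte(`{"metadata":{"labels":{"key":"value-2"}}}`)
	_, err = clientset.CoreV1().Pods("projected-3176").Patch(
		context.TODO(), "labelsupdate-pod", types.MergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
}
```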
• [SLOW TEST:16.757 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1301,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:10:40.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0124 00:10:50.573615 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 24 00:10:50.573: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:10:50.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8987" for this suite. 
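In the garbage collector test above, "wait for all pods to be garbage collected" works because the RC is deleted without orphaning its dependents. In API terms that is a delete with a background (or foreground) propagation policy; a hedged client-go sketch, assuming client-go v0.18+, with the RC name as a placeholder:

```go
// Delete a ReplicationController so that the garbage collector also
// deletes the pods it owns (i.e., dependents are not orphaned).
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Background propagation deletes the RC at once and lets the
	// garbage collector remove its pods via their ownerReferences;
	// metav1.DeletePropagationOrphan would leave the pods behind.
	policy := metav1.DeletePropagationBackground
	if err := clientset.CoreV1().ReplicationControllers("gc-8987").Delete(
		context.TODO(), "simpletest-rc", metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}
}
```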
• [SLOW TEST:10.931 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":84,"skipped":1302,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:10:50.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jan 24 00:10:51.667: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8777 /api/v1/namespaces/watch-8777/configmaps/e2e-watch-test-label-changed e2649e38-e8fc-4a05-9b41-4dae97f06702 3910630 0 2020-01-24 00:10:51 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 24 00:10:51.667: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8777 /api/v1/namespaces/watch-8777/configmaps/e2e-watch-test-label-changed e2649e38-e8fc-4a05-9b41-4dae97f06702 3910631 0 2020-01-24 00:10:51 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 24 00:10:51.667: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8777 /api/v1/namespaces/watch-8777/configmaps/e2e-watch-test-label-changed e2649e38-e8fc-4a05-9b41-4dae97f06702 3910632 0 2020-01-24 00:10:51 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jan 24 00:11:01.748: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8777 /api/v1/namespaces/watch-8777/configmaps/e2e-watch-test-label-changed e2649e38-e8fc-4a05-9b41-4dae97f06702 3910672 0 2020-01-24 00:10:51 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 24 
00:11:01.749: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8777 /api/v1/namespaces/watch-8777/configmaps/e2e-watch-test-label-changed e2649e38-e8fc-4a05-9b41-4dae97f06702 3910673 0 2020-01-24 00:10:51 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jan 24 00:11:01.749: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8777 /api/v1/namespaces/watch-8777/configmaps/e2e-watch-test-label-changed e2649e38-e8fc-4a05-9b41-4dae97f06702 3910674 0 2020-01-24 00:10:51 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:11:01.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8777" for this suite. • [SLOW TEST:10.770 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":85,"skipped":1303,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:11:01.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 24 00:11:01.881: INFO: Waiting up to 5m0s for pod "pod-a7be62c8-97db-44b7-93bf-2ef46297a269" in namespace "emptydir-7096" to be "success or failure" Jan 24 00:11:01.904: INFO: Pod "pod-a7be62c8-97db-44b7-93bf-2ef46297a269": Phase="Pending", Reason="", readiness=false. Elapsed: 22.123166ms Jan 24 00:11:03.911: INFO: Pod "pod-a7be62c8-97db-44b7-93bf-2ef46297a269": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029731277s Jan 24 00:11:05.920: INFO: Pod "pod-a7be62c8-97db-44b7-93bf-2ef46297a269": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038067507s Jan 24 00:11:07.926: INFO: Pod "pod-a7be62c8-97db-44b7-93bf-2ef46297a269": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044910461s Jan 24 00:11:10.129: INFO: Pod "pod-a7be62c8-97db-44b7-93bf-2ef46297a269": Phase="Pending", Reason="", readiness=false. Elapsed: 8.247719721s Jan 24 00:11:12.136: INFO: Pod "pod-a7be62c8-97db-44b7-93bf-2ef46297a269": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.254292256s STEP: Saw pod success Jan 24 00:11:12.136: INFO: Pod "pod-a7be62c8-97db-44b7-93bf-2ef46297a269" satisfied condition "success or failure" Jan 24 00:11:12.145: INFO: Trying to get logs from node jerma-node pod pod-a7be62c8-97db-44b7-93bf-2ef46297a269 container test-container: STEP: delete the pod Jan 24 00:11:12.250: INFO: Waiting for pod pod-a7be62c8-97db-44b7-93bf-2ef46297a269 to disappear Jan 24 00:11:12.260: INFO: Pod pod-a7be62c8-97db-44b7-93bf-2ef46297a269 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:11:12.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7096" for this suite. • [SLOW TEST:10.509 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1314,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:11:12.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2782 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-2782 STEP: Creating statefulset with conflicting port in namespace statefulset-2782 STEP: Waiting until pod test-pod starts running in namespace statefulset-2782 STEP: Waiting until stateful pod ss-0 has been recreated and deleted at least once in namespace statefulset-2782 Jan 24 00:11:20.463: INFO: Observed stateful pod in namespace: statefulset-2782, name: ss-0, uid: d79c52e9-dc8f-4082-8ef9-71136657d8da, status phase: Pending. Waiting for statefulset controller to delete. Jan 24 00:11:22.308: INFO: Observed stateful pod in namespace: statefulset-2782, name: ss-0, uid: d79c52e9-dc8f-4082-8ef9-71136657d8da, status phase: Failed. Waiting for statefulset controller to delete. Jan 24 00:11:22.321: INFO: Observed stateful pod in namespace: statefulset-2782, name: ss-0, uid: d79c52e9-dc8f-4082-8ef9-71136657d8da, status phase: Failed. Waiting for statefulset controller to delete.
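Recreation here is tracked by UID: the replacement pod keeps the name ss-0, so a changed UID (visible in the observations above and in the delete event just below) is the proof that the controller deleted and recreated it. A simplified polling sketch of that check, assuming client-go v0.18+:

```go
package sketch

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForRecreate polls until the pod named name exists with a UID
// different from oldUID, i.e. the controller deleted and recreated it.
func waitForRecreate(c kubernetes.Interface, ns, name string, oldUID types.UID) error {
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return false, nil // deleted; recreation still pending
		}
		if err != nil {
			return false, err
		}
		if pod.UID == oldUID {
			return false, nil // still the original pod
		}
		fmt.Printf("pod %s recreated with uid %s, phase %s\n", name, pod.UID, pod.Status.Phase)
		return true, nil
	})
}
```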
Jan 24 00:11:22.362: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2782 STEP: Removing pod with conflicting port in namespace statefulset-2782 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-2782 and reaches the Running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 24 00:11:32.482: INFO: Deleting all statefulsets in ns statefulset-2782 Jan 24 00:11:32.490: INFO: Scaling statefulset ss to 0 Jan 24 00:11:42.533: INFO: Waiting for statefulset status.replicas to be updated to 0 Jan 24 00:11:42.538: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:11:42.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2782" for this suite. • [SLOW TEST:30.314 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":87,"skipped":1346,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:11:42.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Jan 24 00:11:51.417: INFO: Successfully updated pod "annotationupdate33386924-bece-468f-aed7-eb0c74c21a83" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:11:55.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8834" for this suite.
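The annotations test above follows the same pattern as the earlier labels test: the pod mounts a downward API volume projecting metadata.annotations, and the kubelet refreshes the mounted file after the annotations are updated. A sketch of such a volume using the corev1 Go types; the volume name and mount path are illustrative:

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// podInfo projects the pod's own annotations into
// /etc/podinfo/annotations; on an annotation update the kubelet
// rewrites the file, which is the change the test waits for.
var podInfo = corev1.Volume{
	Name: "podinfo",
	VolumeSource: corev1.VolumeSource{
		DownwardAPI: &corev1.DownwardAPIVolumeSource{
			Items: []corev1.DownwardAPIVolumeFile{{
				Path:     "annotations",
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
			}},
		},
	},
}

// Mount the volume into the container that reads the file.
var podInfoMount = corev1.VolumeMount{Name: "podinfo", MountPath: "/etc/podinfo"}
```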
• [SLOW TEST:12.978 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1349,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:11:55.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating pod Jan 24 00:12:03.862: INFO: Pod pod-hostip-507d96f5-be3d-4058-bb8d-1ae373b60d23 has hostIP: 10.96.2.250 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:12:03.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5898" for this suite. 
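The host-IP assertion above is a one-line read of the pod's status. A minimal client-go sketch, assuming client-go v0.18+; the pod name is a placeholder for the generated pod-hostip-... name:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Status.HostIP stays empty until the pod is scheduled and the
	// kubelet reports status, so the e2e test polls before asserting.
	pod, err := clientset.CoreV1().Pods("pods-5898").Get(
		context.TODO(), "pod-hostip-example", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("Pod %s has hostIP: %s\n", pod.Name, pod.Status.HostIP)
}
```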
• [SLOW TEST:8.310 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1396,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:12:03.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:43 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jan 24 00:12:19.042: INFO: start=2020-01-24 00:12:14.031145855 +0000 UTC m=+1979.794881167, now=2020-01-24 00:12:19.042083749 +0000 UTC m=+1984.805819081, kubelet pod: 
{"metadata":{"name":"pod-submit-remove-a4d04561-200b-4e6d-a7df-277d41f86e20","namespace":"pods-3159","selfLink":"/api/v1/namespaces/pods-3159/pods/pod-submit-remove-a4d04561-200b-4e6d-a7df-277d41f86e20","uid":"5a543225-e37f-4f7c-88db-b3307f280a3c","resourceVersion":"3910978","creationTimestamp":"2020-01-24T00:12:03Z","deletionTimestamp":"2020-01-24T00:12:44Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"939114800"},"annotations":{"kubernetes.io/config.seen":"2020-01-24T00:12:04.000783115Z","kubernetes.io/config.source":"api"}},"spec":{"volumes":[{"name":"default-token-7wrxc","secret":{"secretName":"default-token-7wrxc","defaultMode":420}}],"containers":[{"name":"agnhost","image":"gcr.io/kubernetes-e2e-test-images/agnhost:2.8","args":["pause"],"resources":{},"volumeMounts":[{"name":"default-token-7wrxc","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"jerma-node","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true},"status":{"phase":"Pending","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-01-24T00:12:04Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2020-01-24T00:12:18Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2020-01-24T00:12:18Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-01-24T00:12:03Z"}],"hostIP":"10.96.2.250","podIP":"10.44.0.1","podIPs":[{"ip":"10.44.0.1"}],"startTime":"2020-01-24T00:12:04Z","containerStatuses":[{"name":"agnhost","state":{"waiting":{"reason":"ContainerCreating"}},"lastState":{},"ready":false,"restartCount":0,"image":"gcr.io/kubernetes-e2e-test-images/agnhost:2.8","imageID":"","started":false}],"qosClass":"BestEffort"}} Jan 24 00:12:24.043: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:12:24.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3159" for this suite. 
• [SLOW TEST:20.182 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":90,"skipped":1445,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:12:24.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-map-92cc7b90-abdb-4ade-949b-e92fbdb49e3c STEP: Creating a pod to test consume secrets Jan 24 00:12:24.212: INFO: Waiting up to 5m0s for pod "pod-secrets-611a2e95-475e-4699-9331-efb492f17dc6" in namespace "secrets-8706" to be "success or failure" Jan 24 00:12:24.215: INFO: Pod "pod-secrets-611a2e95-475e-4699-9331-efb492f17dc6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.484508ms Jan 24 00:12:26.318: INFO: Pod "pod-secrets-611a2e95-475e-4699-9331-efb492f17dc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10626318s Jan 24 00:12:28.323: INFO: Pod "pod-secrets-611a2e95-475e-4699-9331-efb492f17dc6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110956086s Jan 24 00:12:30.328: INFO: Pod "pod-secrets-611a2e95-475e-4699-9331-efb492f17dc6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116549759s Jan 24 00:12:32.333: INFO: Pod "pod-secrets-611a2e95-475e-4699-9331-efb492f17dc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.121568731s STEP: Saw pod success Jan 24 00:12:32.334: INFO: Pod "pod-secrets-611a2e95-475e-4699-9331-efb492f17dc6" satisfied condition "success or failure" Jan 24 00:12:32.337: INFO: Trying to get logs from node jerma-node pod pod-secrets-611a2e95-475e-4699-9331-efb492f17dc6 container secret-volume-test: STEP: delete the pod Jan 24 00:12:32.534: INFO: Waiting for pod pod-secrets-611a2e95-475e-4699-9331-efb492f17dc6 to disappear Jan 24 00:12:32.541: INFO: Pod pod-secrets-611a2e95-475e-4699-9331-efb492f17dc6 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:12:32.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8706" for this suite. 
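The secret test above mounts a single key under a remapped path with an explicit file mode, which is what "mappings and Item Mode set" refers to. A sketch of that volume source using the corev1 Go types; the secret name, key, path, and 0400 mode are illustrative stand-ins for the generated values:

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// fileMode 0400 matches the "Item Mode set" part of the test name:
// the mapped file is created read-only for the owner.
var fileMode = int32(0400)

var secretVolume = corev1.Volume{
	Name: "secret-volume",
	VolumeSource: corev1.VolumeSource{
		Secret: &corev1.SecretVolumeSource{
			SecretName: "secret-test-map-example",
			// The mapping: only key "data-1" is exposed, under a new path.
			Items: []corev1.KeyToPath{{
				Key:  "data-1",
				Path: "new-path-data-1",
				Mode: &fileMode,
			}},
		},
	},
}
```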
• [SLOW TEST:8.500 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1451,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:12:32.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:73 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 24 00:12:32.797: INFO: Creating deployment "test-recreate-deployment" Jan 24 00:12:32.849: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jan 24 00:12:32.868: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jan 24 00:12:34.922: INFO: Waiting deployment "test-recreate-deployment" to complete Jan 24 00:12:34.925: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421552, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421552, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421553, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421552, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:12:36.953: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421552, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421552, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421553, 
loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421552, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:12:38.935: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421552, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421552, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421553, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421552, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:12:40.933: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jan 24 00:12:40.944: INFO: Updating deployment test-recreate-deployment Jan 24 00:12:40.944: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:67 Jan 24 00:12:41.364: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-2697 /apis/apps/v1/namespaces/deployment-2697/deployments/test-recreate-deployment f4b4994e-d593-479b-9b57-d744f71a7a02 3911166 2 2020-01-24 00:12:32 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00366d3b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-24 00:12:41 +0000 UTC,LastTransitionTime:2020-01-24 00:12:41 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet
"test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-01-24 00:12:41 +0000 UTC,LastTransitionTime:2020-01-24 00:12:32 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jan 24 00:12:41.559: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-2697 /apis/apps/v1/namespaces/deployment-2697/replicasets/test-recreate-deployment-5f94c574ff bb3ab9bf-7b2b-445c-9273-55cfaf95a8ef 3911164 1 2020-01-24 00:12:41 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment f4b4994e-d593-479b-9b57-d744f71a7a02 0xc00366d7a7 0xc00366d7a8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00366d808 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 24 00:12:41.559: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jan 24 00:12:41.559: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-2697 /apis/apps/v1/namespaces/deployment-2697/replicasets/test-recreate-deployment-799c574856 43706592-e0cf-467f-b20a-ebc73685f7ed 3911155 2 2020-01-24 00:12:32 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment f4b4994e-d593-479b-9b57-d744f71a7a02 0xc00366d887 0xc00366d888}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00366d8f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 24 00:12:41.568: INFO: Pod "test-recreate-deployment-5f94c574ff-dvkgx" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-dvkgx test-recreate-deployment-5f94c574ff- deployment-2697 /api/v1/namespaces/deployment-2697/pods/test-recreate-deployment-5f94c574ff-dvkgx 72435875-f55c-489a-8d2f-e58c4c8c687b 3911169 0 2020-01-24 00:12:41 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff bb3ab9bf-7b2b-445c-9273-55cfaf95a8ef 0xc004f46c87 0xc004f46c88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-92lfw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-92lfw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-92lfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-01-24 00:12:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:12:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:12:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:12:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-24 00:12:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:12:41.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2697" for this suite. • [SLOW TEST:9.021 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":92,"skipped":1470,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:12:41.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 24 00:12:41.937: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e4a4f40c-6af8-4b6b-b4dd-abdfb50109a3" in namespace "projected-449" to be "success or failure" Jan 24 00:12:41.965: INFO: Pod "downwardapi-volume-e4a4f40c-6af8-4b6b-b4dd-abdfb50109a3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 27.393508ms Jan 24 00:12:43.969: INFO: Pod "downwardapi-volume-e4a4f40c-6af8-4b6b-b4dd-abdfb50109a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032004742s Jan 24 00:12:46.049: INFO: Pod "downwardapi-volume-e4a4f40c-6af8-4b6b-b4dd-abdfb50109a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111723263s Jan 24 00:12:48.054: INFO: Pod "downwardapi-volume-e4a4f40c-6af8-4b6b-b4dd-abdfb50109a3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116460321s Jan 24 00:12:50.060: INFO: Pod "downwardapi-volume-e4a4f40c-6af8-4b6b-b4dd-abdfb50109a3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122777726s Jan 24 00:12:52.065: INFO: Pod "downwardapi-volume-e4a4f40c-6af8-4b6b-b4dd-abdfb50109a3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.127543683s Jan 24 00:12:54.070: INFO: Pod "downwardapi-volume-e4a4f40c-6af8-4b6b-b4dd-abdfb50109a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.132966764s STEP: Saw pod success Jan 24 00:12:54.070: INFO: Pod "downwardapi-volume-e4a4f40c-6af8-4b6b-b4dd-abdfb50109a3" satisfied condition "success or failure" Jan 24 00:12:54.073: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-e4a4f40c-6af8-4b6b-b4dd-abdfb50109a3 container client-container: STEP: delete the pod Jan 24 00:12:54.126: INFO: Waiting for pod downwardapi-volume-e4a4f40c-6af8-4b6b-b4dd-abdfb50109a3 to disappear Jan 24 00:12:54.137: INFO: Pod downwardapi-volume-e4a4f40c-6af8-4b6b-b4dd-abdfb50109a3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:12:54.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-449" for this suite. • [SLOW TEST:12.636 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1481,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:12:54.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
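The handler container just created receives the hook request; the pod under test declares a PreStop httpGet hook pointing at it. A sketch of such a container spec using the corev1 Go types contemporary with this log (corev1.Handler; newer releases rename it LifecycleHandler); the IP, port, and path are illustrative stand-ins:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// handlerIP stands in for the pod IP of the hook-handler container the
// test just created; the e2e framework fills in the real IP at runtime.
var handlerIP = "10.44.0.1"

var podWithPreStopHook = corev1.Container{
	Name:  "pod-with-prestop-http-hook",
	Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
	Lifecycle: &corev1.Lifecycle{
		// On pod deletion the kubelet performs this GET before sending
		// SIGTERM; the "check prestop hook" step below verifies that
		// the handler container actually received the request.
		PreStop: &corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/echo?msg=prestop",
				Port: intstr.FromInt(8080),
				Host: handlerIP,
			},
		},
	},
}
```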
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 24 00:13:10.449: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 24 00:13:10.538: INFO: Pod pod-with-prestop-http-hook still exists Jan 24 00:13:12.538: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 24 00:13:12.544: INFO: Pod pod-with-prestop-http-hook still exists Jan 24 00:13:14.538: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 24 00:13:14.547: INFO: Pod pod-with-prestop-http-hook still exists Jan 24 00:13:16.538: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 24 00:13:16.547: INFO: Pod pod-with-prestop-http-hook still exists Jan 24 00:13:18.538: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 24 00:13:18.546: INFO: Pod pod-with-prestop-http-hook still exists Jan 24 00:13:20.538: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 24 00:13:20.576: INFO: Pod pod-with-prestop-http-hook still exists Jan 24 00:13:22.538: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 24 00:13:22.548: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:13:22.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5652" for this suite. • [SLOW TEST:28.373 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1484,"failed":0} SS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:13:22.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:73 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 24 00:13:22.743: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jan 24 00:13:27.752: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 24 00:13:31.814: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jan 24 00:13:33.843: INFO: Creating 
deployment "test-rollover-deployment" Jan 24 00:13:33.859: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jan 24 00:13:35.871: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jan 24 00:13:35.882: INFO: Ensure that both replica sets have 1 created replica Jan 24 00:13:35.890: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jan 24 00:13:35.899: INFO: Updating deployment test-rollover-deployment Jan 24 00:13:35.899: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jan 24 00:13:37.917: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jan 24 00:13:37.925: INFO: Make sure deployment "test-rollover-deployment" is complete Jan 24 00:13:37.980: INFO: all replica sets need to contain the pod-template-hash label Jan 24 00:13:37.980: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421616, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:13:40.025: INFO: all replica sets need to contain the pod-template-hash label Jan 24 00:13:40.025: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421616, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:13:41.997: INFO: all replica sets need to contain the pod-template-hash label Jan 24 00:13:41.997: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421616, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:13:44.282: INFO: all replica sets need to contain the pod-template-hash label Jan 24 00:13:44.282: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421616, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:13:45.991: INFO: all replica sets need to contain the pod-template-hash label Jan 24 00:13:45.991: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421624, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:13:47.992: INFO: all replica sets need to contain the pod-template-hash label Jan 24 00:13:47.992: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421624, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Jan 24 00:13:49.988: INFO: all replica sets need to contain the pod-template-hash label Jan 24 00:13:49.988: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421624, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:13:51.992: INFO: all replica sets need to contain the pod-template-hash label Jan 24 00:13:51.993: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421624, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:13:53.991: INFO: all replica sets need to contain the pod-template-hash label Jan 24 00:13:53.991: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421624, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421613, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:13:55.989: INFO: Jan 24 00:13:55.989: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:67 Jan 24 00:13:55.996: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-7100 
/apis/apps/v1/namespaces/deployment-7100/deployments/test-rollover-deployment 3ce049cc-b2a3-463c-87f9-0ebf9228af69 3911495 2 2020-01-24 00:13:33 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0037e0f98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-24 00:13:33 +0000 UTC,LastTransitionTime:2020-01-24 00:13:33 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-01-24 00:13:54 +0000 UTC,LastTransitionTime:2020-01-24 00:13:33 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 24 00:13:55.999: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-7100 /apis/apps/v1/namespaces/deployment-7100/replicasets/test-rollover-deployment-574d6dfbff 93e90426-286f-4480-a41f-f9c3e7b3e360 3911484 2 2020-01-24 00:13:35 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 3ce049cc-b2a3-463c-87f9-0ebf9228af69 0xc0037e1457 0xc0037e1458}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0037e14c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil 
default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 24 00:13:55.999: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 24 00:13:55.999: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-7100 /apis/apps/v1/namespaces/deployment-7100/replicasets/test-rollover-controller 729e113a-3f4b-4bea-a2df-f7c660fa8942 3911493 2 2020-01-24 00:13:22 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 3ce049cc-b2a3-463c-87f9-0ebf9228af69 0xc0037e1387 0xc0037e1388}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0037e13e8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 24 00:13:56.000: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-7100 /apis/apps/v1/namespaces/deployment-7100/replicasets/test-rollover-deployment-f6c94f66c 99722477-84de-403e-9ed9-3bda5bcfc07c 3911433 2 2020-01-24 00:13:33 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 3ce049cc-b2a3-463c-87f9-0ebf9228af69 0xc0037e1530 0xc0037e1531}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0037e15a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 24 00:13:56.003: INFO: Pod "test-rollover-deployment-574d6dfbff-j8mw7" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-j8mw7 test-rollover-deployment-574d6dfbff- deployment-7100 
/api/v1/namespaces/deployment-7100/pods/test-rollover-deployment-574d6dfbff-j8mw7 f2b039e9-b18c-44f0-ab61-caf4508f06b5 3911458 0 2020-01-24 00:13:36 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 93e90426-286f-4480-a41f-f9c3e7b3e360 0xc003a8fc57 0xc003a8fc58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-667hv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-667hv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-667hv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:13:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:13:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:13:43 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:13:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-24 00:13:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-24 00:13:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://6f2370b2db5b4b280533f23864dd28ee6d808d4cda555fa88f9ce497fdb73eb9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 00:13:56.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7100" for this suite.
• [SLOW TEST:33.423 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":95,"skipped":1486,"failed":0}
S
------------------------------
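The status dumps above make the rollover mechanics visible: with maxSurge 1 and maxUnavailable 0 the controller may run one extra pod but never drop below the desired count, and minReadySeconds 10 forces the new ReplicaSet's pod to stay Ready for ten seconds before the old ReplicaSets are scaled to zero, which is the window the repeated polls wait out. A sketch of an equivalent Deployment using the same v1.17-era types; labels and image mirror the dump, but this is illustrative rather than the suite's exact fixture:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(1)
	maxUnavailable := intstr.FromInt(0) // never drop below the desired count
	maxSurge := intstr.FromInt(1)       // allow one extra pod during the rollover
	labels := map[string]string{"name": "rollover-pod"}

	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:        &replicas,
			Selector:        &metav1.LabelSelector{MatchLabels: labels},
			MinReadySeconds: 10, // new pod must stay Ready this long before the old RS scales down
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
					}},
				},
			},
		},
	}
	fmt.Printf("strategy: %+v minReadySeconds: %d\n", d.Spec.Strategy.RollingUpdate, d.Spec.MinReadySeconds)
}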
[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 00:13:56.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3917 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3917;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3917 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3917;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3917.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3917.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3917.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3917.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3917.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3917.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3917.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3917.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3917.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3917.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3917.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3917.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3917.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 80.0.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.0.80_udp@PTR;check="$$(dig +tcp +noall +answer +search 80.0.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.0.80_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3917 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3917;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3917 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3917;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3917.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3917.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3917.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3917.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3917.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3917.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3917.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3917.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3917.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3917.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3917.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3917.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3917.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 80.0.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.0.80_udp@PTR;check="$$(dig +tcp +noall +answer +search 80.0.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.0.80_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 24 00:14:10.377: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:10.384: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:10.392: INFO: Unable to read wheezy_udp@dns-test-service.dns-3917 from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:10.401: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3917 from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:10.407: INFO: Unable to read wheezy_udp@dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:10.414: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:10.422: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:10.428: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:10.465: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:10.483: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:10.491: INFO: Unable to read jessie_udp@dns-test-service.dns-3917 from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:10.498: INFO: Unable to read jessie_tcp@dns-test-service.dns-3917 from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:10.502: INFO: Unable to read jessie_udp@dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not 
find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:10.505: INFO: Unable to read jessie_tcp@dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:10.510: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:10.513: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:10.682: INFO: Lookups using dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3917 wheezy_tcp@dns-test-service.dns-3917 wheezy_udp@dns-test-service.dns-3917.svc wheezy_tcp@dns-test-service.dns-3917.svc wheezy_udp@_http._tcp.dns-test-service.dns-3917.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3917.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3917 jessie_tcp@dns-test-service.dns-3917 jessie_udp@dns-test-service.dns-3917.svc jessie_tcp@dns-test-service.dns-3917.svc jessie_udp@_http._tcp.dns-test-service.dns-3917.svc jessie_tcp@_http._tcp.dns-test-service.dns-3917.svc] Jan 24 00:14:15.695: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:15.705: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:15.717: INFO: Unable to read wheezy_udp@dns-test-service.dns-3917 from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:15.723: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3917 from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:15.730: INFO: Unable to read wheezy_udp@dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:15.735: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:15.742: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:15.748: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the 
requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:15.791: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:15.796: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:15.801: INFO: Unable to read jessie_udp@dns-test-service.dns-3917 from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:15.805: INFO: Unable to read jessie_tcp@dns-test-service.dns-3917 from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:15.810: INFO: Unable to read jessie_udp@dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:15.814: INFO: Unable to read jessie_tcp@dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:15.819: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:15.826: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:15.862: INFO: Lookups using dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3917 wheezy_tcp@dns-test-service.dns-3917 wheezy_udp@dns-test-service.dns-3917.svc wheezy_tcp@dns-test-service.dns-3917.svc wheezy_udp@_http._tcp.dns-test-service.dns-3917.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3917.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3917 jessie_tcp@dns-test-service.dns-3917 jessie_udp@dns-test-service.dns-3917.svc jessie_tcp@dns-test-service.dns-3917.svc jessie_udp@_http._tcp.dns-test-service.dns-3917.svc jessie_tcp@_http._tcp.dns-test-service.dns-3917.svc] Jan 24 00:14:20.690: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:20.695: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:20.699: INFO: Unable to read wheezy_udp@dns-test-service.dns-3917 from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods 
dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:20.704: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3917 from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:20.709: INFO: Unable to read wheezy_udp@dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:20.714: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:20.718: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:20.722: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:20.752: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:20.756: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:20.759: INFO: Unable to read jessie_udp@dns-test-service.dns-3917 from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:20.763: INFO: Unable to read jessie_tcp@dns-test-service.dns-3917 from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:20.767: INFO: Unable to read jessie_udp@dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:20.789: INFO: Unable to read jessie_tcp@dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:20.793: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:20.796: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:20.821: INFO: Lookups using dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service 
wheezy_udp@dns-test-service.dns-3917 wheezy_tcp@dns-test-service.dns-3917 wheezy_udp@dns-test-service.dns-3917.svc wheezy_tcp@dns-test-service.dns-3917.svc wheezy_udp@_http._tcp.dns-test-service.dns-3917.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3917.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3917 jessie_tcp@dns-test-service.dns-3917 jessie_udp@dns-test-service.dns-3917.svc jessie_tcp@dns-test-service.dns-3917.svc jessie_udp@_http._tcp.dns-test-service.dns-3917.svc jessie_tcp@_http._tcp.dns-test-service.dns-3917.svc] Jan 24 00:14:25.691: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:25.707: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:25.713: INFO: Unable to read wheezy_udp@dns-test-service.dns-3917 from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:25.719: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3917 from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:25.723: INFO: Unable to read wheezy_udp@dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:25.728: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:25.733: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:25.856: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:25.899: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:25.903: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:25.908: INFO: Unable to read jessie_udp@dns-test-service.dns-3917 from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:25.912: INFO: Unable to read jessie_tcp@dns-test-service.dns-3917 from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods 
dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:25.918: INFO: Unable to read jessie_udp@dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:25.923: INFO: Unable to read jessie_tcp@dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:25.930: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:25.934: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:25.962: INFO: Lookups using dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3917 wheezy_tcp@dns-test-service.dns-3917 wheezy_udp@dns-test-service.dns-3917.svc wheezy_tcp@dns-test-service.dns-3917.svc wheezy_udp@_http._tcp.dns-test-service.dns-3917.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3917.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3917 jessie_tcp@dns-test-service.dns-3917 jessie_udp@dns-test-service.dns-3917.svc jessie_tcp@dns-test-service.dns-3917.svc jessie_udp@_http._tcp.dns-test-service.dns-3917.svc jessie_tcp@_http._tcp.dns-test-service.dns-3917.svc] Jan 24 00:14:30.724: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:30.727: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:30.730: INFO: Unable to read wheezy_udp@dns-test-service.dns-3917 from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:30.732: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3917 from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:30.734: INFO: Unable to read wheezy_udp@dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:30.736: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:30.739: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods 
dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:30.741: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:30.761: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:30.764: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:30.777: INFO: Unable to read jessie_udp@dns-test-service.dns-3917 from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:30.780: INFO: Unable to read jessie_tcp@dns-test-service.dns-3917 from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:30.782: INFO: Unable to read jessie_udp@dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:30.784: INFO: Unable to read jessie_tcp@dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:30.787: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:30.790: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:30.805: INFO: Lookups using dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3917 wheezy_tcp@dns-test-service.dns-3917 wheezy_udp@dns-test-service.dns-3917.svc wheezy_tcp@dns-test-service.dns-3917.svc wheezy_udp@_http._tcp.dns-test-service.dns-3917.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3917.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3917 jessie_tcp@dns-test-service.dns-3917 jessie_udp@dns-test-service.dns-3917.svc jessie_tcp@dns-test-service.dns-3917.svc jessie_udp@_http._tcp.dns-test-service.dns-3917.svc jessie_tcp@_http._tcp.dns-test-service.dns-3917.svc] Jan 24 00:14:35.757: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:35.784: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods 
dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:35.790: INFO: Unable to read wheezy_udp@dns-test-service.dns-3917 from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:35.796: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3917 from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:35.801: INFO: Unable to read wheezy_udp@dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:35.809: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:35.814: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:35.822: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:35.924: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:35.946: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:35.960: INFO: Unable to read jessie_udp@dns-test-service.dns-3917 from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:35.968: INFO: Unable to read jessie_tcp@dns-test-service.dns-3917 from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:35.974: INFO: Unable to read jessie_udp@dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:35.981: INFO: Unable to read jessie_tcp@dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:35.986: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b) Jan 24 00:14:35.994: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3917.svc from pod dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b: the server could not find the requested resource (get 
pods dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b)
Jan 24 00:14:36.084: INFO: Lookups using dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3917 wheezy_tcp@dns-test-service.dns-3917 wheezy_udp@dns-test-service.dns-3917.svc wheezy_tcp@dns-test-service.dns-3917.svc wheezy_udp@_http._tcp.dns-test-service.dns-3917.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3917.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3917 jessie_tcp@dns-test-service.dns-3917 jessie_udp@dns-test-service.dns-3917.svc jessie_tcp@dns-test-service.dns-3917.svc jessie_udp@_http._tcp.dns-test-service.dns-3917.svc jessie_tcp@_http._tcp.dns-test-service.dns-3917.svc]
Jan 24 00:14:40.882: INFO: DNS probes using dns-3917/dns-test-9656359f-4874-4c55-a97f-b7e04be9a54b succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 00:14:41.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3917" for this suite.
• [SLOW TEST:45.504 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":96,"skipped":1487,"failed":0}
S
------------------------------
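"Partial qualified names" are the short forms probed above (dns-test-service, dns-test-service.dns-3917, dns-test-service.dns-3917.svc): each relies on the pod's resolv.conf search path to expand to the full cluster domain, which is why the probe scripts pass +search to dig. The repeated "Unable to read ... from pod" lines are the framework polling for the probe's result files until every lookup has succeeded, not test failures. The A and SRV records being probed come from a headless Service the test creates; a sketch of such a Service with v1.17-era types, names mirroring the log and the selector label purely illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Headless Service: ClusterIP "None" makes cluster DNS return the
	// backing pod IPs directly, and the _http._tcp SRV records probed
	// above are generated from the named port below.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "dns-test-service",
			Namespace: "dns-3917",
		},
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone,
			Selector:  map[string]string{"dns-test": "true"}, // label is illustrative
			Ports: []corev1.ServicePort{{
				Name:     "http",
				Protocol: corev1.ProtocolTCP,
				Port:     80,
			}},
		},
	}
	fmt.Printf("%s/%s clusterIP=%q\n", svc.Namespace, svc.Name, svc.Spec.ClusterIP)
}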
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 00:14:41.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 24 00:14:41.751: INFO: Waiting up to 5m0s for pod "pod-80450529-9125-4f68-9054-a271b64c556a" in namespace "emptydir-608" to be "success or failure"
Jan 24 00:14:41.760: INFO: Pod "pod-80450529-9125-4f68-9054-a271b64c556a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.682822ms
Jan 24 00:14:43.769: INFO: Pod "pod-80450529-9125-4f68-9054-a271b64c556a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017290671s
Jan 24 00:14:45.778: INFO: Pod "pod-80450529-9125-4f68-9054-a271b64c556a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026517772s
Jan 24 00:14:47.789: INFO: Pod "pod-80450529-9125-4f68-9054-a271b64c556a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037569771s
Jan 24 00:14:49.796: INFO: Pod "pod-80450529-9125-4f68-9054-a271b64c556a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04522118s
Jan 24 00:14:51.802: INFO: Pod "pod-80450529-9125-4f68-9054-a271b64c556a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.050810745s
STEP: Saw pod success
Jan 24 00:14:51.802: INFO: Pod "pod-80450529-9125-4f68-9054-a271b64c556a" satisfied condition "success or failure"
Jan 24 00:14:51.806: INFO: Trying to get logs from node jerma-node pod pod-80450529-9125-4f68-9054-a271b64c556a container test-container: 
STEP: delete the pod
Jan 24 00:14:51.861: INFO: Waiting for pod pod-80450529-9125-4f68-9054-a271b64c556a to disappear
Jan 24 00:14:51.871: INFO: Pod pod-80450529-9125-4f68-9054-a271b64c556a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 00:14:51.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-608" for this suite.
• [SLOW TEST:10.371 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1488,"failed":0}
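The test name packs the pod spec into a tuple: (non-root,0666,tmpfs) means the container runs as a non-root UID, works with a file created at 0666 permissions, and mounts a memory-backed emptyDir (medium "Memory", i.e. tmpfs). The pod runs to completion and the framework polls for the "success or failure" condition, as seen above. A sketch of an equivalent pod with v1.17-era types; busybox and its command stand in for the suite's mounttest image, and the UID and paths are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // any non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0666-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // pod should end Succeeded, as polled above
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:            "test-container",
				Image:           "busybox:1.29", // stand-in for the mounttest image
				Command:         []string{"sh", "-c", "touch /mnt/f && chmod 0666 /mnt/f && ls -l /mnt/f"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt"}},
			}},
		},
	}
	fmt.Println(pod.Name, "medium:", pod.Spec.Volumes[0].EmptyDir.Medium)
}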
Elapsed: 8.069803414s STEP: Saw pod success Jan 24 00:15:00.183: INFO: Pod "pod-configmaps-a940d105-d3ef-4519-b62a-98ee6794c112" satisfied condition "success or failure" Jan 24 00:15:00.185: INFO: Trying to get logs from node jerma-node pod pod-configmaps-a940d105-d3ef-4519-b62a-98ee6794c112 container env-test: STEP: delete the pod Jan 24 00:15:00.275: INFO: Waiting for pod pod-configmaps-a940d105-d3ef-4519-b62a-98ee6794c112 to disappear Jan 24 00:15:00.280: INFO: Pod pod-configmaps-a940d105-d3ef-4519-b62a-98ee6794c112 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:15:00.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-676" for this suite. • [SLOW TEST:8.404 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1488,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:15:00.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:15:13.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6375" for this suite. • [SLOW TEST:13.642 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":99,"skipped":1492,"failed":0}
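
Note: the quota flow above (create a ResourceQuota, watch its status, admit a fitting pod, reject an exceeding one) is plain API machinery. A minimal client-go sketch of the create-and-observe part, illustrative only and not the suite's own code; it assumes client-go v0.18+ method signatures, the kubeconfig path from this run, and hypothetical names and limits:

    // resourcequota_sketch.go: create a ResourceQuota, then read back its
    // status. The quota controller fills Status.Used asynchronously, which
    // is why the e2e test polls before asserting.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        rq := &corev1.ResourceQuota{
            ObjectMeta: metav1.ObjectMeta{Name: "quota-demo"}, // hypothetical name
            Spec: corev1.ResourceQuotaSpec{
                // Hard caps; a pod whose requests exceed the remainder is
                // rejected at admission time, which is what the test asserts.
                Hard: corev1.ResourceList{
                    corev1.ResourcePods:           resource.MustParse("2"),
                    corev1.ResourceRequestsCPU:    resource.MustParse("500m"),
                    corev1.ResourceRequestsMemory: resource.MustParse("256Mi"),
                },
            },
        }
        created, err := cs.CoreV1().ResourceQuotas("default").Create(
            context.TODO(), rq, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("hard=%v used=%v\n", created.Status.Hard, created.Status.Used)
    }
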
SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:15:13.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 24 00:15:14.125: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:15:15.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-205" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":100,"skipped":1510,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:15:15.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7353 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating statefulset ss in namespace statefulset-7353 Jan 24 00:15:15.902: INFO: Found 0 stateful pods, waiting for 1 Jan 24 00:15:25.910: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 24 00:15:25.947: INFO: Deleting all statefulset in ns statefulset-7353 Jan 24 00:15:25.998: INFO: Scaling statefulset ss to 0 Jan 24 00:15:46.166: INFO: Waiting for statefulset status.replicas updated to 0
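
Note: the "custom resource defaulting for requests and from storage" spec above relies on a structural schema that carries defaults. A sketch of the kind of CRD such a test registers, illustrative only and not the suite's fixture; the group, kind, and default value here are hypothetical:

    // crd_defaulting_sketch.go: a v1 CRD whose structural schema declares a
    // default. The apiserver applies it when serving requests and when
    // reading persisted objects, which is the behavior being verified.
    package main

    import (
        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func defaultedCRD() *apiextensionsv1.CustomResourceDefinition {
        return &apiextensionsv1.CustomResourceDefinition{
            ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"}, // hypothetical
            Spec: apiextensionsv1.CustomResourceDefinitionSpec{
                Group: "example.com",
                Scope: apiextensionsv1.NamespaceScoped,
                Names: apiextensionsv1.CustomResourceDefinitionNames{
                    Plural: "widgets", Singular: "widget",
                    Kind: "Widget", ListKind: "WidgetList",
                },
                Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
                    Name: "v1", Served: true, Storage: true,
                    Schema: &apiextensionsv1.CustomResourceValidation{
                        OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
                            Type: "object",
                            Properties: map[string]apiextensionsv1.JSONSchemaProps{
                                "spec": {
                                    Type: "object",
                                    Properties: map[string]apiextensionsv1.JSONSchemaProps{
                                        // An unset field comes back as "blue" on
                                        // create and on later reads from etcd.
                                        "color": {
                                            Type:    "string",
                                            Default: &apiextensionsv1.JSON{Raw: []byte(`"blue"`)},
                                        },
                                    },
                                },
                            },
                        },
                    },
                }},
            },
        }
    }

With this schema, a Widget created without spec.color is both persisted and served with color "blue", on the create response as well as on subsequent reads.
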
Jan 24 00:15:46.170: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:15:46.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7353" for this suite. • [SLOW TEST:30.599 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":101,"skipped":1543,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:15:46.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Jan 24 00:15:54.885: INFO: Successfully updated pod "annotationupdatec3da9bf3-7667-478c-b104-aa978eed069c" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:15:56.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6062" for this suite.
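
Note: the scale-subresource steps logged above (get scale, update it, verify Spec.Replicas) map one-to-one onto the AppsV1 client. A sketch of the same round trip, not the suite's code; it assumes client-go v0.18+ signatures, reuses the namespace and StatefulSet name from this run, and picks an arbitrary new replica count:

    // scale_subresource_sketch.go: exercise the scale subresource of the
    // StatefulSet "ss" in namespace "statefulset-7353" (names from the log).
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.TODO()
        sts := cs.AppsV1().StatefulSets("statefulset-7353")

        // "getting scale subresource"
        scale, err := sts.GetScale(ctx, "ss", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // "updating a scale subresource" (target count is illustrative)
        scale.Spec.Replicas = 2
        if _, err := sts.UpdateScale(ctx, "ss", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        // "verifying the statefulset Spec.Replicas was modified"
        ss, err := sts.Get(ctx, "ss", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("replicas:", *ss.Spec.Replicas)
    }
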
• [SLOW TEST:10.739 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1555,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:15:56.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-map-f0ff87da-9cbc-4a0e-944d-13d013291127 STEP: Creating a pod to test consume configMaps Jan 24 00:15:57.138: INFO: Waiting up to 5m0s for pod "pod-configmaps-7adfc6c6-e9fc-4b01-93db-86f1ea4060c4" in namespace "configmap-8911" to be "success or failure" Jan 24 00:15:57.149: INFO: Pod "pod-configmaps-7adfc6c6-e9fc-4b01-93db-86f1ea4060c4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.394835ms Jan 24 00:15:59.155: INFO: Pod "pod-configmaps-7adfc6c6-e9fc-4b01-93db-86f1ea4060c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017290189s Jan 24 00:16:01.161: INFO: Pod "pod-configmaps-7adfc6c6-e9fc-4b01-93db-86f1ea4060c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02311293s Jan 24 00:16:03.265: INFO: Pod "pod-configmaps-7adfc6c6-e9fc-4b01-93db-86f1ea4060c4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127427843s Jan 24 00:16:05.271: INFO: Pod "pod-configmaps-7adfc6c6-e9fc-4b01-93db-86f1ea4060c4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.133017512s Jan 24 00:16:07.279: INFO: Pod "pod-configmaps-7adfc6c6-e9fc-4b01-93db-86f1ea4060c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.140523479s STEP: Saw pod success Jan 24 00:16:07.279: INFO: Pod "pod-configmaps-7adfc6c6-e9fc-4b01-93db-86f1ea4060c4" satisfied condition "success or failure" Jan 24 00:16:07.283: INFO: Trying to get logs from node jerma-node pod pod-configmaps-7adfc6c6-e9fc-4b01-93db-86f1ea4060c4 container configmap-volume-test: STEP: delete the pod Jan 24 00:16:07.424: INFO: Waiting for pod pod-configmaps-7adfc6c6-e9fc-4b01-93db-86f1ea4060c4 to disappear Jan 24 00:16:07.472: INFO: Pod pod-configmaps-7adfc6c6-e9fc-4b01-93db-86f1ea4060c4 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:16:07.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8911" for this suite. 
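
Note: the annotation-update spec above works because a projected downwardAPI volume is rewritten by the kubelet when pod metadata changes. A sketch of such a pod spec, illustrative only and not the suite's fixture; the image, command, and names are assumptions:

    // projected_downwardapi_sketch.go: a pod whose projected downwardAPI
    // volume exposes metadata.annotations as a file the container can read.
    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func annotationPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:        "annotationupdate-demo", // hypothetical name
                Annotations: map[string]string{"build": "one"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "client",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        Path:     "annotations",
                                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
                                    }},
                                },
                            }},
                        },
                    },
                }},
            },
        }
    }

After the pod's annotations are updated through the API, the kubelet eventually rewrites the mounted annotations file, which is the propagation the test waits for.
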
• [SLOW TEST:10.518 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1568,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:16:07.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:331 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a replication controller Jan 24 00:16:07.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4938' Jan 24 00:16:10.506: INFO: stderr: "" Jan 24 00:16:10.506: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 24 00:16:10.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4938' Jan 24 00:16:10.675: INFO: stderr: "" Jan 24 00:16:10.675: INFO: stdout: "update-demo-nautilus-tp7jc update-demo-nautilus-z42kv " Jan 24 00:16:10.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tp7jc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4938' Jan 24 00:16:10.789: INFO: stderr: "" Jan 24 00:16:10.789: INFO: stdout: "" Jan 24 00:16:10.789: INFO: update-demo-nautilus-tp7jc is created but not running Jan 24 00:16:15.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4938' Jan 24 00:16:17.324: INFO: stderr: "" Jan 24 00:16:17.324: INFO: stdout: "update-demo-nautilus-tp7jc update-demo-nautilus-z42kv " Jan 24 00:16:17.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tp7jc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4938' Jan 24 00:16:18.031: INFO: stderr: "" Jan 24 00:16:18.031: INFO: stdout: "" Jan 24 00:16:18.031: INFO: update-demo-nautilus-tp7jc is created but not running Jan 24 00:16:23.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4938' Jan 24 00:16:23.172: INFO: stderr: "" Jan 24 00:16:23.172: INFO: stdout: "update-demo-nautilus-tp7jc update-demo-nautilus-z42kv " Jan 24 00:16:23.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tp7jc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4938' Jan 24 00:16:23.289: INFO: stderr: "" Jan 24 00:16:23.289: INFO: stdout: "true" Jan 24 00:16:23.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tp7jc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4938' Jan 24 00:16:23.399: INFO: stderr: "" Jan 24 00:16:23.399: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 24 00:16:23.399: INFO: validating pod update-demo-nautilus-tp7jc Jan 24 00:16:23.408: INFO: got data: { "image": "nautilus.jpg" } Jan 24 00:16:23.408: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 24 00:16:23.408: INFO: update-demo-nautilus-tp7jc is verified up and running Jan 24 00:16:23.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z42kv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4938' Jan 24 00:16:23.578: INFO: stderr: "" Jan 24 00:16:23.578: INFO: stdout: "true" Jan 24 00:16:23.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z42kv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4938' Jan 24 00:16:23.664: INFO: stderr: "" Jan 24 00:16:23.664: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 24 00:16:23.664: INFO: validating pod update-demo-nautilus-z42kv Jan 24 00:16:23.683: INFO: got data: { "image": "nautilus.jpg" } Jan 24 00:16:23.683: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 24 00:16:23.683: INFO: update-demo-nautilus-z42kv is verified up and running STEP: using delete to clean up resources Jan 24 00:16:23.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4938' Jan 24 00:16:23.776: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 24 00:16:23.777: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 24 00:16:23.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4938' Jan 24 00:16:24.011: INFO: stderr: "No resources found in kubectl-4938 namespace.\n" Jan 24 00:16:24.011: INFO: stdout: "" Jan 24 00:16:24.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4938 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 24 00:16:24.201: INFO: stderr: "" Jan 24 00:16:24.201: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:16:24.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4938" for this suite. • [SLOW TEST:16.730 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":104,"skipped":1604,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:16:24.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 24 00:16:26.289: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 24 00:16:28.307: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421786, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421786, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421786, loc:(*time.Location)(0x7d7cf00)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421786, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:16:30.313: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421786, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421786, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421786, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421786, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:16:32.312: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421786, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421786, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421786, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421786, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:16:34.314: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421786, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421786, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421786, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715421786, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 24 00:16:37.369: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 24 00:16:37.385: INFO: >>> kubeConfig: /root/.kube/config
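
Note: the repeated status dumps above show the webhook deployment stuck at ReadyReplicas:0 with Available=False until its pod starts. A sketch of the readiness poll they correspond to, not the framework's helper; it assumes client-go v0.18+ signatures and uses the deployment and namespace named in this run:

    // deployment_ready_sketch.go: poll "sample-webhook-deployment" in
    // namespace "webhook-3213" until the Available condition is True and
    // all replicas are ready, mirroring the wait logged above.
    package main

    import (
        "context"
        "fmt"
        "time"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        deadline := time.Now().Add(2 * time.Minute) // timeout is illustrative
        for time.Now().Before(deadline) {
            d, err := cs.AppsV1().Deployments("webhook-3213").Get(
                context.TODO(), "sample-webhook-deployment", metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            available := false
            for _, c := range d.Status.Conditions {
                if c.Type == appsv1.DeploymentAvailable && c.Status == corev1.ConditionTrue {
                    available = true
                }
            }
            if available && d.Status.ReadyReplicas == d.Status.Replicas {
                fmt.Println("deployment ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        panic("timed out waiting for deployment")
    }
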
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3026-crds.webhook.example.com via the AdmissionRegistration API Jan 24 00:16:37.550: INFO: Waiting for webhook configuration to be ready... STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:16:38.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3213" for this suite. STEP: Destroying namespace "webhook-3213-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:14.167 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":105,"skipped":1607,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:16:38.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Jan 24 00:16:38.519: INFO: Waiting up to 5m0s for pod "downward-api-23cc17e0-6497-4bda-b93e-003a7e9eae14" in namespace "downward-api-4173" to be "success or failure" Jan 24 00:16:38.526: INFO: Pod "downward-api-23cc17e0-6497-4bda-b93e-003a7e9eae14": Phase="Pending", Reason="", readiness=false. Elapsed: 6.795956ms Jan 24 00:16:40.536: INFO: Pod "downward-api-23cc17e0-6497-4bda-b93e-003a7e9eae14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017552305s Jan 24 00:16:42.544: INFO: Pod "downward-api-23cc17e0-6497-4bda-b93e-003a7e9eae14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025054884s Jan 24 00:16:44.552: INFO: Pod "downward-api-23cc17e0-6497-4bda-b93e-003a7e9eae14": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033612655s Jan 24 00:16:46.563: INFO: Pod "downward-api-23cc17e0-6497-4bda-b93e-003a7e9eae14": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044031846s Jan 24 00:16:48.568: INFO: Pod "downward-api-23cc17e0-6497-4bda-b93e-003a7e9eae14": Phase="Pending", Reason="", readiness=false. Elapsed: 10.049749813s Jan 24 00:16:50.577: INFO: Pod "downward-api-23cc17e0-6497-4bda-b93e-003a7e9eae14": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 12.058106604s STEP: Saw pod success Jan 24 00:16:50.577: INFO: Pod "downward-api-23cc17e0-6497-4bda-b93e-003a7e9eae14" satisfied condition "success or failure" Jan 24 00:16:50.581: INFO: Trying to get logs from node jerma-node pod downward-api-23cc17e0-6497-4bda-b93e-003a7e9eae14 container dapi-container: STEP: delete the pod Jan 24 00:16:50.652: INFO: Waiting for pod downward-api-23cc17e0-6497-4bda-b93e-003a7e9eae14 to disappear Jan 24 00:16:50.655: INFO: Pod downward-api-23cc17e0-6497-4bda-b93e-003a7e9eae14 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:16:50.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4173" for this suite. • [SLOW TEST:12.273 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1616,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:16:50.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name s-test-opt-del-f9871555-e98a-4ada-b023-927868fc471d STEP: Creating secret with name s-test-opt-upd-9622f06f-8fa5-4ef9-8222-23cfa12de126 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-f9871555-e98a-4ada-b023-927868fc471d STEP: Updating secret s-test-opt-upd-9622f06f-8fa5-4ef9-8222-23cfa12de126 STEP: Creating secret with name s-test-opt-create-8120a825-f01d-45cb-8e4b-29533018e481 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:18:26.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5184" for this suite. 
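
Note: the Downward API spec that just passed needs nothing more than an env var wired to a field selector. A minimal sketch of that wiring, illustrative only; the container name matches the log, while the image, command, and pod name are assumptions:

    // downward_api_sketch.go: inject the node's IP into the container
    // environment from status.hostIP, which is the value the test checks.
    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func downwardAPIPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"}, // hypothetical
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "echo HOST_IP=$HOST_IP"},
                    Env: []corev1.EnvVar{{
                        Name: "HOST_IP",
                        ValueFrom: &corev1.EnvVarSource{
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
                        },
                    }},
                }},
            },
        }
    }
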
• [SLOW TEST:95.418 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1624,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:18:26.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-fbf80562-54ba-4eb2-81ba-797f44b39dbb STEP: Creating a pod to test consume configMaps Jan 24 00:18:26.186: INFO: Waiting up to 5m0s for pod "pod-configmaps-2d2387e6-1038-4aec-b188-d3fe3711ff87" in namespace "configmap-8222" to be "success or failure" Jan 24 00:18:26.229: INFO: Pod "pod-configmaps-2d2387e6-1038-4aec-b188-d3fe3711ff87": Phase="Pending", Reason="", readiness=false. Elapsed: 43.421773ms Jan 24 00:18:28.236: INFO: Pod "pod-configmaps-2d2387e6-1038-4aec-b188-d3fe3711ff87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049938821s Jan 24 00:18:30.242: INFO: Pod "pod-configmaps-2d2387e6-1038-4aec-b188-d3fe3711ff87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056134709s Jan 24 00:18:32.247: INFO: Pod "pod-configmaps-2d2387e6-1038-4aec-b188-d3fe3711ff87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061093348s Jan 24 00:18:34.269: INFO: Pod "pod-configmaps-2d2387e6-1038-4aec-b188-d3fe3711ff87": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083883817s Jan 24 00:18:36.277: INFO: Pod "pod-configmaps-2d2387e6-1038-4aec-b188-d3fe3711ff87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.091410156s STEP: Saw pod success Jan 24 00:18:36.277: INFO: Pod "pod-configmaps-2d2387e6-1038-4aec-b188-d3fe3711ff87" satisfied condition "success or failure" Jan 24 00:18:36.281: INFO: Trying to get logs from node jerma-node pod pod-configmaps-2d2387e6-1038-4aec-b188-d3fe3711ff87 container configmap-volume-test: STEP: delete the pod Jan 24 00:18:36.341: INFO: Waiting for pod pod-configmaps-2d2387e6-1038-4aec-b188-d3fe3711ff87 to disappear Jan 24 00:18:36.347: INFO: Pod pod-configmaps-2d2387e6-1038-4aec-b188-d3fe3711ff87 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:18:36.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8222" for this suite. 
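
Note: the optional-secret behavior exercised above hinges on a single field of the volume source. A sketch of that field, illustrative only; the helper name is hypothetical:

    // optional_secret_sketch.go: a secret volume that tolerates a missing
    // secret. With Optional set, the pod starts even while the secret does
    // not exist; the kubelet materializes the keys once it appears and
    // refreshes them on update, which is the propagation the test observes.
    package main

    import (
        corev1 "k8s.io/api/core/v1"
    )

    func optionalSecretVolume(secretName string) corev1.Volume {
        optional := true
        return corev1.Volume{
            Name: "secret-volume",
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{
                    SecretName: secretName, // e.g. the s-test-opt-* names above
                    Optional:   &optional,  // tolerate a missing secret
                },
            },
        }
    }
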
• [SLOW TEST:10.331 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1630,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:18:36.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 24 00:18:36.539: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-9413 I0124 00:18:36.555898 8 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9413, replica count: 1 I0124 00:18:37.606725 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:18:38.607147 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:18:39.607593 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:18:40.607961 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:18:41.608213 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:18:42.608623 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:18:43.609105 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:18:44.609660 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:18:45.610013 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 24 00:18:45.740: INFO: Created: latency-svc-blgkt Jan 24 00:18:45.747: INFO: Got endpoints: latency-svc-blgkt [36.935663ms] Jan 24 00:18:45.829: INFO: Created: latency-svc-cbt6h Jan 24 00:18:45.832: INFO: Got endpoints: latency-svc-cbt6h [85.181657ms] Jan 24 00:18:45.864: INFO: Created: latency-svc-lngfn Jan 24 00:18:45.883: INFO: Got endpoints: latency-svc-lngfn [135.632924ms]
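
Note: each "Created:"/"Got endpoints:" pair in this stretch of the log is one latency sample: create a Service selecting the svc-latency-rc pod, then measure how long until the matching Endpoints object has a ready address. A polling sketch of a single sample, illustrative only; the real suite observes endpoints through a watch/informer rather than polling, and the selector label and timeout here are assumptions:

    // svc_latency_sketch.go: time how long it takes for a new Service's
    // Endpoints object to be populated by the endpoints controller.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func timeEndpoints(cs kubernetes.Interface, ns, name string) (time.Duration, error) {
        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: corev1.ServiceSpec{
                // Assumed label; it must match the RC's pod template labels.
                Selector: map[string]string{"name": "svc-latency-rc"},
                Ports:    []corev1.ServicePort{{Port: 80}},
            },
        }
        start := time.Now()
        if _, err := cs.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
            return 0, err
        }
        fmt.Println("Created:", name)
        for time.Since(start) < time.Minute {
            ep, err := cs.CoreV1().Endpoints(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil && len(ep.Subsets) > 0 && len(ep.Subsets[0].Addresses) > 0 {
                return time.Since(start), nil // the bracketed "Got endpoints" latency
            }
            time.Sleep(10 * time.Millisecond)
        }
        return 0, fmt.Errorf("timed out waiting for endpoints of %s", name)
    }
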
Jan 24 00:18:45.901: INFO: Created: latency-svc-njfdt Jan 24 00:18:45.956: INFO: Got endpoints: latency-svc-njfdt [208.687968ms] Jan 24 00:18:45.968: INFO: Created: latency-svc-rwr27 Jan 24 00:18:45.977: INFO: Got endpoints: latency-svc-rwr27 [229.067277ms] Jan 24 00:18:46.002: INFO: Created: latency-svc-jv6bw Jan 24 00:18:46.011: INFO: Got endpoints: latency-svc-jv6bw [263.86846ms] Jan 24 00:18:46.037: INFO: Created: latency-svc-gmhfp Jan 24 00:18:46.043: INFO: Got endpoints: latency-svc-gmhfp [294.930656ms] Jan 24 00:18:46.149: INFO: Created: latency-svc-d9bqv Jan 24 00:18:46.159: INFO: Got endpoints: latency-svc-d9bqv [412.024586ms] Jan 24 00:18:46.172: INFO: Created: latency-svc-tt42w Jan 24 00:18:46.178: INFO: Got endpoints: latency-svc-tt42w [429.911608ms] Jan 24 00:18:46.217: INFO: Created: latency-svc-zb2b8 Jan 24 00:18:46.271: INFO: Got endpoints: latency-svc-zb2b8 [523.235193ms] Jan 24 00:18:46.282: INFO: Created: latency-svc-45pxp Jan 24 00:18:46.295: INFO: Got endpoints: latency-svc-45pxp [547.442673ms] Jan 24 00:18:46.321: INFO: Created: latency-svc-trnw7 Jan 24 00:18:46.331: INFO: Got endpoints: latency-svc-trnw7 [583.463014ms] Jan 24 00:18:46.350: INFO: Created: latency-svc-wzzn9 Jan 24 00:18:46.361: INFO: Got endpoints: latency-svc-wzzn9 [613.31122ms] Jan 24 00:18:46.412: INFO: Created: latency-svc-56l4w Jan 24 00:18:46.416: INFO: Got endpoints: latency-svc-56l4w [668.430852ms] Jan 24 00:18:46.472: INFO: Created: latency-svc-hdtlv Jan 24 00:18:46.483: INFO: Got endpoints: latency-svc-hdtlv [734.566604ms] Jan 24 00:18:46.580: INFO: Created: latency-svc-xw874 Jan 24 00:18:46.614: INFO: Got endpoints: latency-svc-xw874 [865.794422ms] Jan 24 00:18:46.615: INFO: Created: latency-svc-79p95 Jan 24 00:18:46.656: INFO: Created: latency-svc-z5g7z Jan 24 00:18:46.657: INFO: Got endpoints: latency-svc-79p95 [824.340836ms] Jan 24 00:18:46.788: INFO: Got endpoints: latency-svc-z5g7z [904.422673ms] Jan 24 00:18:46.796: INFO: Created: latency-svc-n5lzl Jan 24 00:18:46.826: INFO: Got endpoints: latency-svc-n5lzl [869.398917ms] Jan 24 00:18:46.857: INFO: Created: latency-svc-lt6np Jan 24 00:18:46.875: INFO: Got endpoints: latency-svc-lt6np [898.19693ms] Jan 24 00:18:46.881: INFO: Created: latency-svc-tx49z Jan 24 00:18:46.976: INFO: Got endpoints: latency-svc-tx49z [964.24809ms] Jan 24 00:18:46.984: INFO: Created: latency-svc-557n4 Jan 24 00:18:46.989: INFO: Got endpoints: latency-svc-557n4 [946.720104ms] Jan 24 00:18:47.079: INFO: Created: latency-svc-whd4d Jan 24 00:18:47.148: INFO: Got endpoints: latency-svc-whd4d [989.232938ms] Jan 24 00:18:47.164: INFO: Created: latency-svc-lrplm Jan 24 00:18:47.175: INFO: Got endpoints: latency-svc-lrplm [997.398201ms] Jan 24 00:18:47.213: INFO: Created: latency-svc-9jb4l Jan 24 00:18:47.238: INFO: Got endpoints: latency-svc-9jb4l [966.956942ms] Jan 24 00:18:47.312: INFO: Created: latency-svc-scmwp Jan 24 00:18:47.319: INFO: Got endpoints: latency-svc-scmwp [1.024004961s] Jan 24 00:18:47.477: INFO: Created: latency-svc-wkwxk Jan 24 00:18:47.521: INFO: Created: latency-svc-bng8p Jan 24 00:18:47.523: INFO: Got endpoints: latency-svc-wkwxk [1.192127048s] Jan 24 00:18:47.535: INFO: Got endpoints: latency-svc-bng8p [1.173688202s] Jan 24 00:18:47.616: INFO: Created: latency-svc-w22hc Jan 24 00:18:47.652: INFO: Got endpoints: latency-svc-w22hc [1.235148788s] Jan 24 00:18:47.656: INFO: Created: latency-svc-6sqxl Jan 24 00:18:47.778: INFO: Got endpoints: latency-svc-6sqxl [1.295404815s] Jan 24 00:18:47.786: INFO: Created: latency-svc-wgbd5 Jan 24 00:18:47.799: INFO: Got endpoints:
latency-svc-wgbd5 [1.185518542s] Jan 24 00:18:47.826: INFO: Created: latency-svc-4vcxj Jan 24 00:18:47.830: INFO: Got endpoints: latency-svc-4vcxj [1.173302715s] Jan 24 00:18:47.858: INFO: Created: latency-svc-whskd Jan 24 00:18:47.932: INFO: Got endpoints: latency-svc-whskd [1.143877969s] Jan 24 00:18:47.941: INFO: Created: latency-svc-2fj42 Jan 24 00:18:47.951: INFO: Got endpoints: latency-svc-2fj42 [1.124848648s] Jan 24 00:18:47.973: INFO: Created: latency-svc-wgk84 Jan 24 00:18:47.984: INFO: Got endpoints: latency-svc-wgk84 [1.109206736s] Jan 24 00:18:48.020: INFO: Created: latency-svc-vmzkq Jan 24 00:18:48.162: INFO: Got endpoints: latency-svc-vmzkq [1.185714745s] Jan 24 00:18:48.171: INFO: Created: latency-svc-9dw9l Jan 24 00:18:48.176: INFO: Got endpoints: latency-svc-9dw9l [1.186045839s] Jan 24 00:18:48.217: INFO: Created: latency-svc-lj4rr Jan 24 00:18:48.225: INFO: Got endpoints: latency-svc-lj4rr [1.076042769s] Jan 24 00:18:48.260: INFO: Created: latency-svc-zr4k2 Jan 24 00:18:48.314: INFO: Got endpoints: latency-svc-zr4k2 [1.138913791s] Jan 24 00:18:48.339: INFO: Created: latency-svc-wd9jl Jan 24 00:18:48.351: INFO: Got endpoints: latency-svc-wd9jl [1.113543115s] Jan 24 00:18:48.378: INFO: Created: latency-svc-qcbng Jan 24 00:18:48.379: INFO: Got endpoints: latency-svc-qcbng [1.059927185s] Jan 24 00:18:48.511: INFO: Created: latency-svc-hvp6c Jan 24 00:18:48.511: INFO: Got endpoints: latency-svc-hvp6c [987.527449ms] Jan 24 00:18:48.726: INFO: Created: latency-svc-45mhq Jan 24 00:18:48.752: INFO: Got endpoints: latency-svc-45mhq [1.21630909s] Jan 24 00:18:48.878: INFO: Created: latency-svc-n5sq6 Jan 24 00:18:48.912: INFO: Got endpoints: latency-svc-n5sq6 [1.259994798s] Jan 24 00:18:48.918: INFO: Created: latency-svc-f8ztx Jan 24 00:18:48.922: INFO: Got endpoints: latency-svc-f8ztx [1.143796945s] Jan 24 00:18:48.963: INFO: Created: latency-svc-zm4mb Jan 24 00:18:49.090: INFO: Got endpoints: latency-svc-zm4mb [1.290130427s] Jan 24 00:18:49.125: INFO: Created: latency-svc-pd697 Jan 24 00:18:49.141: INFO: Got endpoints: latency-svc-pd697 [1.310619787s] Jan 24 00:18:49.144: INFO: Created: latency-svc-dwmvc Jan 24 00:18:49.163: INFO: Got endpoints: latency-svc-dwmvc [1.231201869s] Jan 24 00:18:49.365: INFO: Created: latency-svc-zfrb9 Jan 24 00:18:49.398: INFO: Got endpoints: latency-svc-zfrb9 [1.446588258s] Jan 24 00:18:49.403: INFO: Created: latency-svc-vd5qt Jan 24 00:18:49.411: INFO: Got endpoints: latency-svc-vd5qt [1.426573249s] Jan 24 00:18:49.584: INFO: Created: latency-svc-l2zjh Jan 24 00:18:49.607: INFO: Got endpoints: latency-svc-l2zjh [1.444959261s] Jan 24 00:18:49.631: INFO: Created: latency-svc-mvsvf Jan 24 00:18:49.645: INFO: Got endpoints: latency-svc-mvsvf [1.469505998s] Jan 24 00:18:49.668: INFO: Created: latency-svc-5tkjt Jan 24 00:18:49.671: INFO: Got endpoints: latency-svc-5tkjt [1.446029641s] Jan 24 00:18:49.783: INFO: Created: latency-svc-57w5x Jan 24 00:18:49.806: INFO: Got endpoints: latency-svc-57w5x [1.491495129s] Jan 24 00:18:49.840: INFO: Created: latency-svc-hh4cw Jan 24 00:18:49.845: INFO: Got endpoints: latency-svc-hh4cw [1.493129211s] Jan 24 00:18:49.873: INFO: Created: latency-svc-54tld Jan 24 00:18:49.880: INFO: Got endpoints: latency-svc-54tld [1.500353344s] Jan 24 00:18:49.942: INFO: Created: latency-svc-xgrf4 Jan 24 00:18:49.957: INFO: Got endpoints: latency-svc-xgrf4 [1.446148636s] Jan 24 00:18:50.029: INFO: Created: latency-svc-r27xr Jan 24 00:18:50.124: INFO: Got endpoints: latency-svc-r27xr [1.372756771s] Jan 24 00:18:50.131: INFO: Created: 
latency-svc-sq6w6 Jan 24 00:18:50.153: INFO: Got endpoints: latency-svc-sq6w6 [1.241475043s] Jan 24 00:18:50.176: INFO: Created: latency-svc-2sf99 Jan 24 00:18:50.204: INFO: Got endpoints: latency-svc-2sf99 [1.281989351s] Jan 24 00:18:50.217: INFO: Created: latency-svc-n59nt Jan 24 00:18:50.347: INFO: Got endpoints: latency-svc-n59nt [1.257530674s] Jan 24 00:18:50.351: INFO: Created: latency-svc-mpjd6 Jan 24 00:18:50.353: INFO: Got endpoints: latency-svc-mpjd6 [1.211946844s] Jan 24 00:18:50.392: INFO: Created: latency-svc-tkmsj Jan 24 00:18:50.420: INFO: Got endpoints: latency-svc-tkmsj [1.257431684s] Jan 24 00:18:50.541: INFO: Created: latency-svc-q5c2c Jan 24 00:18:50.541: INFO: Got endpoints: latency-svc-q5c2c [1.143862006s] Jan 24 00:18:50.632: INFO: Created: latency-svc-tf6sj Jan 24 00:18:50.720: INFO: Got endpoints: latency-svc-tf6sj [1.30904856s] Jan 24 00:18:50.724: INFO: Created: latency-svc-mdclj Jan 24 00:18:50.736: INFO: Got endpoints: latency-svc-mdclj [1.129150528s] Jan 24 00:18:50.759: INFO: Created: latency-svc-mpcjt Jan 24 00:18:50.771: INFO: Got endpoints: latency-svc-mpcjt [1.125509703s] Jan 24 00:18:50.788: INFO: Created: latency-svc-qx4lg Jan 24 00:18:50.807: INFO: Got endpoints: latency-svc-qx4lg [1.135734753s] Jan 24 00:18:50.879: INFO: Created: latency-svc-c564m Jan 24 00:18:50.917: INFO: Created: latency-svc-p2msv Jan 24 00:18:50.919: INFO: Got endpoints: latency-svc-c564m [1.112893503s] Jan 24 00:18:50.941: INFO: Got endpoints: latency-svc-p2msv [1.096088704s] Jan 24 00:18:50.945: INFO: Created: latency-svc-zrc9n Jan 24 00:18:50.950: INFO: Got endpoints: latency-svc-zrc9n [1.070074652s] Jan 24 00:18:51.212: INFO: Created: latency-svc-46cxm Jan 24 00:18:51.222: INFO: Got endpoints: latency-svc-46cxm [1.264815158s] Jan 24 00:18:51.270: INFO: Created: latency-svc-plm86 Jan 24 00:18:51.285: INFO: Got endpoints: latency-svc-plm86 [1.160898842s] Jan 24 00:18:51.438: INFO: Created: latency-svc-lnqvh Jan 24 00:18:51.462: INFO: Got endpoints: latency-svc-lnqvh [1.3086191s] Jan 24 00:18:51.495: INFO: Created: latency-svc-jhkjv Jan 24 00:18:51.500: INFO: Got endpoints: latency-svc-jhkjv [1.295858617s] Jan 24 00:18:51.520: INFO: Created: latency-svc-2ktq5 Jan 24 00:18:51.665: INFO: Got endpoints: latency-svc-2ktq5 [1.317654495s] Jan 24 00:18:51.672: INFO: Created: latency-svc-cqdl7 Jan 24 00:18:51.682: INFO: Got endpoints: latency-svc-cqdl7 [1.328867812s] Jan 24 00:18:51.815: INFO: Created: latency-svc-lhc57 Jan 24 00:18:51.817: INFO: Got endpoints: latency-svc-lhc57 [1.396530096s] Jan 24 00:18:51.881: INFO: Created: latency-svc-6cpsg Jan 24 00:18:51.895: INFO: Got endpoints: latency-svc-6cpsg [1.352980284s] Jan 24 00:18:51.903: INFO: Created: latency-svc-scgw4 Jan 24 00:18:51.909: INFO: Got endpoints: latency-svc-scgw4 [1.188817037s] Jan 24 00:18:51.980: INFO: Created: latency-svc-z97zk Jan 24 00:18:52.014: INFO: Got endpoints: latency-svc-z97zk [1.278373704s] Jan 24 00:18:52.016: INFO: Created: latency-svc-wm228 Jan 24 00:18:52.042: INFO: Got endpoints: latency-svc-wm228 [1.270615913s] Jan 24 00:18:52.080: INFO: Created: latency-svc-sflzb Jan 24 00:18:52.124: INFO: Got endpoints: latency-svc-sflzb [1.316913082s] Jan 24 00:18:52.134: INFO: Created: latency-svc-8dvlk Jan 24 00:18:52.148: INFO: Got endpoints: latency-svc-8dvlk [1.22890626s] Jan 24 00:18:52.216: INFO: Created: latency-svc-w8gpm Jan 24 00:18:52.299: INFO: Got endpoints: latency-svc-w8gpm [1.357789029s] Jan 24 00:18:52.304: INFO: Created: latency-svc-vrkrn Jan 24 00:18:52.316: INFO: Got endpoints: 
latency-svc-vrkrn [1.36582036s] Jan 24 00:18:52.336: INFO: Created: latency-svc-mbcw7 Jan 24 00:18:52.345: INFO: Got endpoints: latency-svc-mbcw7 [1.122425271s] Jan 24 00:18:52.365: INFO: Created: latency-svc-wq89z Jan 24 00:18:52.375: INFO: Got endpoints: latency-svc-wq89z [1.089198345s] Jan 24 00:18:52.399: INFO: Created: latency-svc-rsw5l Jan 24 00:18:52.478: INFO: Got endpoints: latency-svc-rsw5l [1.015553303s] Jan 24 00:18:52.484: INFO: Created: latency-svc-cbwzh Jan 24 00:18:52.501: INFO: Got endpoints: latency-svc-cbwzh [1.000142144s] Jan 24 00:18:52.529: INFO: Created: latency-svc-npmxh Jan 24 00:18:52.656: INFO: Got endpoints: latency-svc-npmxh [990.945448ms] Jan 24 00:18:52.667: INFO: Created: latency-svc-kkggs Jan 24 00:18:52.695: INFO: Got endpoints: latency-svc-kkggs [1.012975996s] Jan 24 00:18:52.696: INFO: Created: latency-svc-cl7t9 Jan 24 00:18:52.703: INFO: Got endpoints: latency-svc-cl7t9 [885.96821ms] Jan 24 00:18:52.736: INFO: Created: latency-svc-t5lp5 Jan 24 00:18:52.744: INFO: Got endpoints: latency-svc-t5lp5 [87.482095ms] Jan 24 00:18:52.799: INFO: Created: latency-svc-t8hrj Jan 24 00:18:52.804: INFO: Got endpoints: latency-svc-t8hrj [909.156017ms] Jan 24 00:18:52.832: INFO: Created: latency-svc-7xjf7 Jan 24 00:18:52.864: INFO: Got endpoints: latency-svc-7xjf7 [954.780725ms] Jan 24 00:18:52.891: INFO: Created: latency-svc-5mf28 Jan 24 00:18:52.957: INFO: Got endpoints: latency-svc-5mf28 [942.693298ms] Jan 24 00:18:52.977: INFO: Created: latency-svc-mwr2m Jan 24 00:18:52.988: INFO: Got endpoints: latency-svc-mwr2m [945.792056ms] Jan 24 00:18:53.100: INFO: Created: latency-svc-ng28f Jan 24 00:18:53.124: INFO: Got endpoints: latency-svc-ng28f [1.000114528s] Jan 24 00:18:53.149: INFO: Created: latency-svc-s89w7 Jan 24 00:18:53.153: INFO: Got endpoints: latency-svc-s89w7 [1.004836794s] Jan 24 00:18:53.184: INFO: Created: latency-svc-8dc44 Jan 24 00:18:53.203: INFO: Got endpoints: latency-svc-8dc44 [903.838635ms] Jan 24 00:18:53.292: INFO: Created: latency-svc-g24bh Jan 24 00:18:53.292: INFO: Got endpoints: latency-svc-g24bh [975.776401ms] Jan 24 00:18:53.334: INFO: Created: latency-svc-b6nxj Jan 24 00:18:53.342: INFO: Got endpoints: latency-svc-b6nxj [997.330046ms] Jan 24 00:18:53.378: INFO: Created: latency-svc-d8v8l Jan 24 00:18:53.383: INFO: Got endpoints: latency-svc-d8v8l [1.00806242s] Jan 24 00:18:53.424: INFO: Created: latency-svc-2m955 Jan 24 00:18:53.490: INFO: Got endpoints: latency-svc-2m955 [1.011799106s] Jan 24 00:18:53.519: INFO: Created: latency-svc-p9rbb Jan 24 00:18:53.563: INFO: Got endpoints: latency-svc-p9rbb [1.062738728s] Jan 24 00:18:53.571: INFO: Created: latency-svc-tqdrx Jan 24 00:18:53.589: INFO: Got endpoints: latency-svc-tqdrx [893.191671ms] Jan 24 00:18:53.611: INFO: Created: latency-svc-79j8p Jan 24 00:18:53.630: INFO: Got endpoints: latency-svc-79j8p [927.127631ms] Jan 24 00:18:53.731: INFO: Created: latency-svc-qbkg9 Jan 24 00:18:53.763: INFO: Got endpoints: latency-svc-qbkg9 [1.018839594s] Jan 24 00:18:53.763: INFO: Created: latency-svc-b6vgd Jan 24 00:18:53.792: INFO: Got endpoints: latency-svc-b6vgd [987.340434ms] Jan 24 00:18:53.829: INFO: Created: latency-svc-wh2pp Jan 24 00:18:53.878: INFO: Got endpoints: latency-svc-wh2pp [1.013545015s] Jan 24 00:18:53.944: INFO: Created: latency-svc-gfxn7 Jan 24 00:18:53.953: INFO: Got endpoints: latency-svc-gfxn7 [996.012563ms] Jan 24 00:18:53.970: INFO: Created: latency-svc-8vktg Jan 24 00:18:54.027: INFO: Got endpoints: latency-svc-8vktg [1.039476865s] Jan 24 00:18:54.054: INFO: Created: 
latency-svc-kbpft Jan 24 00:18:54.097: INFO: Created: latency-svc-tf66p Jan 24 00:18:54.098: INFO: Got endpoints: latency-svc-kbpft [973.901582ms] Jan 24 00:18:54.104: INFO: Got endpoints: latency-svc-tf66p [950.342639ms] Jan 24 00:18:54.203: INFO: Created: latency-svc-s5m4w Jan 24 00:18:54.232: INFO: Got endpoints: latency-svc-s5m4w [1.028795195s] Jan 24 00:18:54.238: INFO: Created: latency-svc-92fzq Jan 24 00:18:54.246: INFO: Got endpoints: latency-svc-92fzq [954.000207ms] Jan 24 00:18:54.285: INFO: Created: latency-svc-wbkmk Jan 24 00:18:54.299: INFO: Got endpoints: latency-svc-wbkmk [956.555871ms] Jan 24 00:18:54.403: INFO: Created: latency-svc-m8ktf Jan 24 00:18:54.414: INFO: Got endpoints: latency-svc-m8ktf [1.031318502s] Jan 24 00:18:54.445: INFO: Created: latency-svc-r59ls Jan 24 00:18:54.456: INFO: Got endpoints: latency-svc-r59ls [966.206842ms] Jan 24 00:18:54.485: INFO: Created: latency-svc-9b65n Jan 24 00:18:54.492: INFO: Got endpoints: latency-svc-9b65n [928.005978ms] Jan 24 00:18:54.577: INFO: Created: latency-svc-v2kxq Jan 24 00:18:54.625: INFO: Created: latency-svc-6vczn Jan 24 00:18:54.625: INFO: Got endpoints: latency-svc-v2kxq [1.036506544s] Jan 24 00:18:54.642: INFO: Got endpoints: latency-svc-6vczn [1.011359273s] Jan 24 00:18:54.764: INFO: Created: latency-svc-cghck Jan 24 00:18:54.783: INFO: Got endpoints: latency-svc-cghck [1.019928728s] Jan 24 00:18:54.946: INFO: Created: latency-svc-9228s Jan 24 00:18:54.953: INFO: Got endpoints: latency-svc-9228s [1.161140858s] Jan 24 00:18:55.107: INFO: Created: latency-svc-sx87b Jan 24 00:18:55.115: INFO: Got endpoints: latency-svc-sx87b [1.237312336s] Jan 24 00:18:55.162: INFO: Created: latency-svc-rvgg4 Jan 24 00:18:55.183: INFO: Got endpoints: latency-svc-rvgg4 [1.229200258s] Jan 24 00:18:55.290: INFO: Created: latency-svc-2h2lh Jan 24 00:18:55.305: INFO: Got endpoints: latency-svc-2h2lh [1.277846282s] Jan 24 00:18:55.350: INFO: Created: latency-svc-j7xrh Jan 24 00:18:55.365: INFO: Got endpoints: latency-svc-j7xrh [1.26696494s] Jan 24 00:18:55.473: INFO: Created: latency-svc-f4j2m Jan 24 00:18:55.491: INFO: Got endpoints: latency-svc-f4j2m [1.387123975s] Jan 24 00:18:55.526: INFO: Created: latency-svc-7wv96 Jan 24 00:18:55.532: INFO: Got endpoints: latency-svc-7wv96 [1.300637172s] Jan 24 00:18:55.562: INFO: Created: latency-svc-hw9c7 Jan 24 00:18:55.730: INFO: Got endpoints: latency-svc-hw9c7 [1.484082311s] Jan 24 00:18:55.737: INFO: Created: latency-svc-xkzcb Jan 24 00:18:55.783: INFO: Got endpoints: latency-svc-xkzcb [1.483608668s] Jan 24 00:18:55.806: INFO: Created: latency-svc-fwlkg Jan 24 00:18:55.811: INFO: Got endpoints: latency-svc-fwlkg [1.397288594s] Jan 24 00:18:55.924: INFO: Created: latency-svc-nv8mr Jan 24 00:18:55.929: INFO: Got endpoints: latency-svc-nv8mr [1.473163269s] Jan 24 00:18:55.997: INFO: Created: latency-svc-7f4sw Jan 24 00:18:56.001: INFO: Got endpoints: latency-svc-7f4sw [1.509582855s] Jan 24 00:18:56.021: INFO: Created: latency-svc-5vnw9 Jan 24 00:18:56.052: INFO: Got endpoints: latency-svc-5vnw9 [1.42582237s] Jan 24 00:18:56.081: INFO: Created: latency-svc-t7mk6 Jan 24 00:18:56.098: INFO: Got endpoints: latency-svc-t7mk6 [1.456332603s] Jan 24 00:18:56.215: INFO: Created: latency-svc-znvx6 Jan 24 00:18:56.244: INFO: Created: latency-svc-v5wks Jan 24 00:18:56.248: INFO: Got endpoints: latency-svc-znvx6 [1.464664156s] Jan 24 00:18:56.257: INFO: Got endpoints: latency-svc-v5wks [1.304028568s] Jan 24 00:18:56.280: INFO: Created: latency-svc-kxkjb Jan 24 00:18:56.291: INFO: Got endpoints: 
latency-svc-kxkjb [1.175314675s] Jan 24 00:18:56.313: INFO: Created: latency-svc-mztv8 Jan 24 00:18:56.357: INFO: Got endpoints: latency-svc-mztv8 [1.174110423s] Jan 24 00:18:56.400: INFO: Created: latency-svc-6dvn4 Jan 24 00:18:56.421: INFO: Got endpoints: latency-svc-6dvn4 [1.115925669s] Jan 24 00:18:56.445: INFO: Created: latency-svc-h9bnh Jan 24 00:18:56.498: INFO: Got endpoints: latency-svc-h9bnh [1.132796132s] Jan 24 00:18:56.506: INFO: Created: latency-svc-ldffl Jan 24 00:18:56.507: INFO: Got endpoints: latency-svc-ldffl [1.016343976s] Jan 24 00:18:56.596: INFO: Created: latency-svc-dv9tg Jan 24 00:18:56.663: INFO: Got endpoints: latency-svc-dv9tg [1.130736597s] Jan 24 00:18:56.703: INFO: Created: latency-svc-cfpgq Jan 24 00:18:56.713: INFO: Got endpoints: latency-svc-cfpgq [982.626388ms] Jan 24 00:18:56.754: INFO: Created: latency-svc-9fqpt Jan 24 00:18:56.760: INFO: Got endpoints: latency-svc-9fqpt [976.813432ms] Jan 24 00:18:56.821: INFO: Created: latency-svc-f5hkp Jan 24 00:18:56.836: INFO: Got endpoints: latency-svc-f5hkp [1.024257884s] Jan 24 00:18:56.855: INFO: Created: latency-svc-bhw67 Jan 24 00:18:56.866: INFO: Got endpoints: latency-svc-bhw67 [936.935538ms] Jan 24 00:18:56.901: INFO: Created: latency-svc-pxfg4 Jan 24 00:18:56.914: INFO: Got endpoints: latency-svc-pxfg4 [912.963917ms] Jan 24 00:18:56.949: INFO: Created: latency-svc-hgb4p Jan 24 00:18:56.983: INFO: Got endpoints: latency-svc-hgb4p [931.026328ms] Jan 24 00:18:57.130: INFO: Created: latency-svc-2f5kn Jan 24 00:18:57.164: INFO: Created: latency-svc-84c89 Jan 24 00:18:57.164: INFO: Got endpoints: latency-svc-2f5kn [1.065756807s] Jan 24 00:18:57.185: INFO: Got endpoints: latency-svc-84c89 [937.286367ms] Jan 24 00:18:57.205: INFO: Created: latency-svc-pd6vm Jan 24 00:18:57.210: INFO: Got endpoints: latency-svc-pd6vm [952.968952ms] Jan 24 00:18:57.283: INFO: Created: latency-svc-rwcj5 Jan 24 00:18:57.284: INFO: Got endpoints: latency-svc-rwcj5 [993.285325ms] Jan 24 00:18:57.313: INFO: Created: latency-svc-h7svh Jan 24 00:18:57.321: INFO: Got endpoints: latency-svc-h7svh [963.751209ms] Jan 24 00:18:57.337: INFO: Created: latency-svc-7ttjk Jan 24 00:18:57.351: INFO: Got endpoints: latency-svc-7ttjk [930.05264ms] Jan 24 00:18:57.377: INFO: Created: latency-svc-mv8nj Jan 24 00:18:57.556: INFO: Got endpoints: latency-svc-mv8nj [1.058130212s] Jan 24 00:18:57.565: INFO: Created: latency-svc-g2rrq Jan 24 00:18:57.576: INFO: Got endpoints: latency-svc-g2rrq [1.068227308s] Jan 24 00:18:57.643: INFO: Created: latency-svc-jcb75 Jan 24 00:18:57.654: INFO: Got endpoints: latency-svc-jcb75 [991.019945ms] Jan 24 00:18:57.747: INFO: Created: latency-svc-mg4b8 Jan 24 00:18:57.777: INFO: Got endpoints: latency-svc-mg4b8 [1.064631061s] Jan 24 00:18:57.778: INFO: Created: latency-svc-z87w7 Jan 24 00:18:57.784: INFO: Got endpoints: latency-svc-z87w7 [1.024785863s] Jan 24 00:18:57.800: INFO: Created: latency-svc-762zc Jan 24 00:18:57.809: INFO: Got endpoints: latency-svc-762zc [972.544077ms] Jan 24 00:18:57.826: INFO: Created: latency-svc-j2mmn Jan 24 00:18:57.829: INFO: Got endpoints: latency-svc-j2mmn [962.369823ms] Jan 24 00:18:57.904: INFO: Created: latency-svc-gscd2 Jan 24 00:18:58.447: INFO: Got endpoints: latency-svc-gscd2 [1.533042644s] Jan 24 00:18:58.458: INFO: Created: latency-svc-2hbkx Jan 24 00:18:58.466: INFO: Got endpoints: latency-svc-2hbkx [1.483520752s] Jan 24 00:18:58.497: INFO: Created: latency-svc-stcbm Jan 24 00:18:58.502: INFO: Got endpoints: latency-svc-stcbm [1.338192572s] Jan 24 00:18:58.537: INFO: Created: 
latency-svc-6b2vb Jan 24 00:18:58.610: INFO: Got endpoints: latency-svc-6b2vb [1.424490279s] Jan 24 00:18:58.617: INFO: Created: latency-svc-x2fcw Jan 24 00:18:58.634: INFO: Got endpoints: latency-svc-x2fcw [1.423255638s] Jan 24 00:18:58.653: INFO: Created: latency-svc-8x89t Jan 24 00:18:58.666: INFO: Got endpoints: latency-svc-8x89t [1.382055382s] Jan 24 00:18:58.698: INFO: Created: latency-svc-xqg5d Jan 24 00:18:58.839: INFO: Got endpoints: latency-svc-xqg5d [1.517900588s] Jan 24 00:18:58.865: INFO: Created: latency-svc-f456j Jan 24 00:18:58.888: INFO: Got endpoints: latency-svc-f456j [1.536788541s] Jan 24 00:18:58.918: INFO: Created: latency-svc-dglvt Jan 24 00:18:58.925: INFO: Got endpoints: latency-svc-dglvt [1.368569965s] Jan 24 00:18:59.043: INFO: Created: latency-svc-f8bsc Jan 24 00:18:59.083: INFO: Got endpoints: latency-svc-f8bsc [1.507334869s] Jan 24 00:18:59.225: INFO: Created: latency-svc-2k9fn Jan 24 00:18:59.277: INFO: Got endpoints: latency-svc-2k9fn [1.622941853s] Jan 24 00:18:59.284: INFO: Created: latency-svc-jbn5n Jan 24 00:18:59.297: INFO: Got endpoints: latency-svc-jbn5n [1.519537872s] Jan 24 00:18:59.406: INFO: Created: latency-svc-fznfd Jan 24 00:18:59.434: INFO: Created: latency-svc-kvtpt Jan 24 00:18:59.434: INFO: Got endpoints: latency-svc-fznfd [1.649979921s] Jan 24 00:18:59.453: INFO: Got endpoints: latency-svc-kvtpt [1.644021476s] Jan 24 00:18:59.591: INFO: Created: latency-svc-q5nzt Jan 24 00:18:59.601: INFO: Got endpoints: latency-svc-q5nzt [1.771950292s] Jan 24 00:18:59.659: INFO: Created: latency-svc-mtpz7 Jan 24 00:18:59.668: INFO: Got endpoints: latency-svc-mtpz7 [1.220400502s] Jan 24 00:18:59.746: INFO: Created: latency-svc-tz68t Jan 24 00:18:59.763: INFO: Got endpoints: latency-svc-tz68t [1.296486102s] Jan 24 00:18:59.786: INFO: Created: latency-svc-qxvr2 Jan 24 00:18:59.799: INFO: Got endpoints: latency-svc-qxvr2 [1.296650561s] Jan 24 00:18:59.819: INFO: Created: latency-svc-c7xrb Jan 24 00:18:59.819: INFO: Got endpoints: latency-svc-c7xrb [1.209017164s] Jan 24 00:18:59.841: INFO: Created: latency-svc-stvkv Jan 24 00:18:59.914: INFO: Got endpoints: latency-svc-stvkv [1.280053225s] Jan 24 00:18:59.924: INFO: Created: latency-svc-w7dqv Jan 24 00:18:59.938: INFO: Got endpoints: latency-svc-w7dqv [1.27205451s] Jan 24 00:19:00.073: INFO: Created: latency-svc-jw768 Jan 24 00:19:00.081: INFO: Got endpoints: latency-svc-jw768 [1.241851113s] Jan 24 00:19:00.106: INFO: Created: latency-svc-qk842 Jan 24 00:19:00.126: INFO: Got endpoints: latency-svc-qk842 [1.237633605s] Jan 24 00:19:00.128: INFO: Created: latency-svc-85d9p Jan 24 00:19:00.156: INFO: Got endpoints: latency-svc-85d9p [1.231581081s] Jan 24 00:19:00.231: INFO: Created: latency-svc-xxp7j Jan 24 00:19:00.231: INFO: Got endpoints: latency-svc-xxp7j [1.148175032s] Jan 24 00:19:00.284: INFO: Created: latency-svc-5vbtb Jan 24 00:19:00.285: INFO: Got endpoints: latency-svc-5vbtb [1.007512256s] Jan 24 00:19:00.376: INFO: Created: latency-svc-fjk2l Jan 24 00:19:00.384: INFO: Got endpoints: latency-svc-fjk2l [1.087418765s] Jan 24 00:19:00.421: INFO: Created: latency-svc-l4w79 Jan 24 00:19:00.422: INFO: Got endpoints: latency-svc-l4w79 [987.331829ms] Jan 24 00:19:00.598: INFO: Created: latency-svc-lwtc8 Jan 24 00:19:00.617: INFO: Got endpoints: latency-svc-lwtc8 [1.16383248s] Jan 24 00:19:00.619: INFO: Created: latency-svc-qpccr Jan 24 00:19:00.632: INFO: Got endpoints: latency-svc-qpccr [1.03134676s] Jan 24 00:19:00.678: INFO: Created: latency-svc-fqnbq Jan 24 00:19:00.770: INFO: Got endpoints: 
latency-svc-fqnbq [1.102446888s] Jan 24 00:19:00.812: INFO: Created: latency-svc-tcq28 Jan 24 00:19:00.824: INFO: Got endpoints: latency-svc-tcq28 [1.060601702s] Jan 24 00:19:00.853: INFO: Created: latency-svc-f4bn7 Jan 24 00:19:00.860: INFO: Got endpoints: latency-svc-f4bn7 [1.061036468s] Jan 24 00:19:00.935: INFO: Created: latency-svc-scp4g Jan 24 00:19:01.007: INFO: Created: latency-svc-fwkjd Jan 24 00:19:01.007: INFO: Got endpoints: latency-svc-scp4g [1.188546046s] Jan 24 00:19:01.075: INFO: Got endpoints: latency-svc-fwkjd [1.158012839s] Jan 24 00:19:01.077: INFO: Created: latency-svc-nlz5r Jan 24 00:19:01.081: INFO: Got endpoints: latency-svc-nlz5r [1.142668779s] Jan 24 00:19:01.081: INFO: Latencies: [85.181657ms 87.482095ms 135.632924ms 208.687968ms 229.067277ms 263.86846ms 294.930656ms 412.024586ms 429.911608ms 523.235193ms 547.442673ms 583.463014ms 613.31122ms 668.430852ms 734.566604ms 824.340836ms 865.794422ms 869.398917ms 885.96821ms 893.191671ms 898.19693ms 903.838635ms 904.422673ms 909.156017ms 912.963917ms 927.127631ms 928.005978ms 930.05264ms 931.026328ms 936.935538ms 937.286367ms 942.693298ms 945.792056ms 946.720104ms 950.342639ms 952.968952ms 954.000207ms 954.780725ms 956.555871ms 962.369823ms 963.751209ms 964.24809ms 966.206842ms 966.956942ms 972.544077ms 973.901582ms 975.776401ms 976.813432ms 982.626388ms 987.331829ms 987.340434ms 987.527449ms 989.232938ms 990.945448ms 991.019945ms 993.285325ms 996.012563ms 997.330046ms 997.398201ms 1.000114528s 1.000142144s 1.004836794s 1.007512256s 1.00806242s 1.011359273s 1.011799106s 1.012975996s 1.013545015s 1.015553303s 1.016343976s 1.018839594s 1.019928728s 1.024004961s 1.024257884s 1.024785863s 1.028795195s 1.031318502s 1.03134676s 1.036506544s 1.039476865s 1.058130212s 1.059927185s 1.060601702s 1.061036468s 1.062738728s 1.064631061s 1.065756807s 1.068227308s 1.070074652s 1.076042769s 1.087418765s 1.089198345s 1.096088704s 1.102446888s 1.109206736s 1.112893503s 1.113543115s 1.115925669s 1.122425271s 1.124848648s 1.125509703s 1.129150528s 1.130736597s 1.132796132s 1.135734753s 1.138913791s 1.142668779s 1.143796945s 1.143862006s 1.143877969s 1.148175032s 1.158012839s 1.160898842s 1.161140858s 1.16383248s 1.173302715s 1.173688202s 1.174110423s 1.175314675s 1.185518542s 1.185714745s 1.186045839s 1.188546046s 1.188817037s 1.192127048s 1.209017164s 1.211946844s 1.21630909s 1.220400502s 1.22890626s 1.229200258s 1.231201869s 1.231581081s 1.235148788s 1.237312336s 1.237633605s 1.241475043s 1.241851113s 1.257431684s 1.257530674s 1.259994798s 1.264815158s 1.26696494s 1.270615913s 1.27205451s 1.277846282s 1.278373704s 1.280053225s 1.281989351s 1.290130427s 1.295404815s 1.295858617s 1.296486102s 1.296650561s 1.300637172s 1.304028568s 1.3086191s 1.30904856s 1.310619787s 1.316913082s 1.317654495s 1.328867812s 1.338192572s 1.352980284s 1.357789029s 1.36582036s 1.368569965s 1.372756771s 1.382055382s 1.387123975s 1.396530096s 1.397288594s 1.423255638s 1.424490279s 1.42582237s 1.426573249s 1.444959261s 1.446029641s 1.446148636s 1.446588258s 1.456332603s 1.464664156s 1.469505998s 1.473163269s 1.483520752s 1.483608668s 1.484082311s 1.491495129s 1.493129211s 1.500353344s 1.507334869s 1.509582855s 1.517900588s 1.519537872s 1.533042644s 1.536788541s 1.622941853s 1.644021476s 1.649979921s 1.771950292s] Jan 24 00:19:01.081: INFO: 50 %ile: 1.125509703s Jan 24 00:19:01.081: INFO: 90 %ile: 1.456332603s Jan 24 00:19:01.081: INFO: 99 %ile: 1.649979921s Jan 24 00:19:01.081: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:19:01.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-9413" for this suite. • [SLOW TEST:24.723 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":109,"skipped":1645,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:19:01.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jan 24 00:19:15.823: INFO: Successfully updated pod "adopt-release-fdqmm" STEP: Checking that the Job readopts the Pod Jan 24 00:19:15.823: INFO: Waiting up to 15m0s for pod "adopt-release-fdqmm" in namespace "job-2519" to be "adopted" Jan 24 00:19:15.829: INFO: Pod "adopt-release-fdqmm": Phase="Running", Reason="", readiness=true. Elapsed: 6.159418ms Jan 24 00:19:17.843: INFO: Pod "adopt-release-fdqmm": Phase="Running", Reason="", readiness=true. Elapsed: 2.020051086s Jan 24 00:19:17.843: INFO: Pod "adopt-release-fdqmm" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jan 24 00:19:18.358: INFO: Successfully updated pod "adopt-release-fdqmm" STEP: Checking that the Job releases the Pod Jan 24 00:19:18.358: INFO: Waiting up to 15m0s for pod "adopt-release-fdqmm" in namespace "job-2519" to be "released" Jan 24 00:19:18.388: INFO: Pod "adopt-release-fdqmm": Phase="Running", Reason="", readiness=true. Elapsed: 29.515413ms Jan 24 00:19:18.388: INFO: Pod "adopt-release-fdqmm" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:19:18.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2519" for this suite. 
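The adopt/release check above reduces to two pod updates: drop the pod's ownerReferences and let the Job controller re-adopt it (its labels still match the Job's selector), then strip those labels so the controller releases it again. Below is a minimal client-go sketch of the first half, not the suite's own code; it assumes client-go v0.18+ (context-aware signatures) and borrows the pod and namespace names from the log purely as placeholders.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	const ns, name = "job-2519", "adopt-release-fdqmm" // names from the log, illustrative only

	// Orphan the pod: drop its controller ownerReference.
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	pod.OwnerReferences = nil
	if pod, err = cs.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// The Job controller should re-adopt the pod shortly, because its labels
	// still match the Job's selector; a real check would poll here, as the
	// test does with its "adopted" condition.
	pod, err = cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("ownerReferences now: %+v\n", pod.OwnerReferences)
}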
• [SLOW TEST:17.517 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":110,"skipped":1701,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:19:18.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-configmap-kj8h STEP: Creating a pod to test atomic-volume-subpath Jan 24 00:19:18.957: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-kj8h" in namespace "subpath-2971" to be "success or failure" Jan 24 00:19:19.056: INFO: Pod "pod-subpath-test-configmap-kj8h": Phase="Pending", Reason="", readiness=false. Elapsed: 99.362852ms Jan 24 00:19:21.325: INFO: Pod "pod-subpath-test-configmap-kj8h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.368172125s Jan 24 00:19:23.536: INFO: Pod "pod-subpath-test-configmap-kj8h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.579244815s Jan 24 00:19:26.996: INFO: Pod "pod-subpath-test-configmap-kj8h": Phase="Pending", Reason="", readiness=false. Elapsed: 8.039834373s Jan 24 00:19:29.021: INFO: Pod "pod-subpath-test-configmap-kj8h": Phase="Pending", Reason="", readiness=false. Elapsed: 10.064720198s Jan 24 00:19:31.068: INFO: Pod "pod-subpath-test-configmap-kj8h": Phase="Pending", Reason="", readiness=false. Elapsed: 12.111200495s Jan 24 00:19:33.105: INFO: Pod "pod-subpath-test-configmap-kj8h": Phase="Running", Reason="", readiness=true. Elapsed: 14.148108724s Jan 24 00:19:35.135: INFO: Pod "pod-subpath-test-configmap-kj8h": Phase="Running", Reason="", readiness=true. Elapsed: 16.178260092s Jan 24 00:19:37.147: INFO: Pod "pod-subpath-test-configmap-kj8h": Phase="Running", Reason="", readiness=true. Elapsed: 18.18993245s Jan 24 00:19:39.155: INFO: Pod "pod-subpath-test-configmap-kj8h": Phase="Running", Reason="", readiness=true. Elapsed: 20.197837494s Jan 24 00:19:41.196: INFO: Pod "pod-subpath-test-configmap-kj8h": Phase="Running", Reason="", readiness=true. Elapsed: 22.23983475s Jan 24 00:19:43.203: INFO: Pod "pod-subpath-test-configmap-kj8h": Phase="Running", Reason="", readiness=true. Elapsed: 24.246642464s Jan 24 00:19:45.211: INFO: Pod "pod-subpath-test-configmap-kj8h": Phase="Running", Reason="", readiness=true. Elapsed: 26.254121567s Jan 24 00:19:47.219: INFO: Pod "pod-subpath-test-configmap-kj8h": Phase="Running", Reason="", readiness=true. 
Elapsed: 28.262425006s Jan 24 00:19:49.226: INFO: Pod "pod-subpath-test-configmap-kj8h": Phase="Running", Reason="", readiness=true. Elapsed: 30.269497833s Jan 24 00:19:51.377: INFO: Pod "pod-subpath-test-configmap-kj8h": Phase="Running", Reason="", readiness=true. Elapsed: 32.420021826s Jan 24 00:19:54.171: INFO: Pod "pod-subpath-test-configmap-kj8h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.21384197s STEP: Saw pod success Jan 24 00:19:54.171: INFO: Pod "pod-subpath-test-configmap-kj8h" satisfied condition "success or failure" Jan 24 00:19:54.182: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-subpath-test-configmap-kj8h container test-container-subpath-configmap-kj8h: STEP: delete the pod Jan 24 00:19:54.540: INFO: Waiting for pod pod-subpath-test-configmap-kj8h to disappear Jan 24 00:19:54.546: INFO: Pod pod-subpath-test-configmap-kj8h no longer exists STEP: Deleting pod pod-subpath-test-configmap-kj8h Jan 24 00:19:54.546: INFO: Deleting pod "pod-subpath-test-configmap-kj8h" in namespace "subpath-2971" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:19:54.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2971" for this suite. • [SLOW TEST:35.946 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":111,"skipped":1712,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:19:54.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:20:01.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8126" for this suite. • [SLOW TEST:7.235 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":112,"skipped":1732,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:20:01.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 24 00:20:02.102: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-70425d04-3638-4adb-9162-aa8457990b1a" in namespace "security-context-test-5585" to be "success or failure" Jan 24 00:20:02.109: INFO: Pod "busybox-privileged-false-70425d04-3638-4adb-9162-aa8457990b1a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.060034ms Jan 24 00:20:04.116: INFO: Pod "busybox-privileged-false-70425d04-3638-4adb-9162-aa8457990b1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013311376s Jan 24 00:20:06.120: INFO: Pod "busybox-privileged-false-70425d04-3638-4adb-9162-aa8457990b1a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017565148s Jan 24 00:20:08.125: INFO: Pod "busybox-privileged-false-70425d04-3638-4adb-9162-aa8457990b1a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022463942s Jan 24 00:20:10.129: INFO: Pod "busybox-privileged-false-70425d04-3638-4adb-9162-aa8457990b1a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.026585849s Jan 24 00:20:12.135: INFO: Pod "busybox-privileged-false-70425d04-3638-4adb-9162-aa8457990b1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.032617071s Jan 24 00:20:12.135: INFO: Pod "busybox-privileged-false-70425d04-3638-4adb-9162-aa8457990b1a" satisfied condition "success or failure" Jan 24 00:20:12.172: INFO: Got logs for pod "busybox-privileged-false-70425d04-3638-4adb-9162-aa8457990b1a": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:20:12.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5585" for this suite. 
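The pod under test pairs privileged=false with a command that needs CAP_NET_ADMIN, which is why its log ends in "RTNETLINK answers: Operation not permitted". A sketch of a pod built the same way, using k8s.io/api types; the image and command here are illustrative, not copied from the suite.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

// unprivilegedBusybox builds a pod in the same spirit as the one the test
// creates: privileged=false, running a command that requires CAP_NET_ADMIN
// and is therefore expected to fail with "Operation not permitted".
func unprivilegedBusybox() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-privileged-false"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ip link add dummy0 type dummy"},
				SecurityContext: &corev1.SecurityContext{
					Privileged: boolPtr(false),
				},
			}},
		},
	}
}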
• [SLOW TEST:10.346 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1746,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:20:12.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:687 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service externalname-service with the type=ExternalName in namespace services-208 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-208 I0124 00:20:12.714304 8 runners.go:189] Created replication controller with name: externalname-service, namespace: services-208, replica count: 2 I0124 00:20:15.765339 8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:20:18.765914 8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:20:21.766474 8 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 24 00:20:21.766: INFO: Creating new exec pod Jan 24 00:20:30.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-208 execpodpr9v6 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jan 24 00:20:31.149: INFO: stderr: "I0124 00:20:30.943488 2316 log.go:172] (0xc000950a50) (0xc00060de00) Create stream\nI0124 00:20:30.943588 2316 log.go:172] (0xc000950a50) (0xc00060de00) Stream added, broadcasting: 1\nI0124 00:20:30.949680 2316 log.go:172] (0xc000950a50) Reply frame received for 1\nI0124 00:20:30.949767 2316 log.go:172] (0xc000950a50) (0xc0007a8140) Create stream\nI0124 00:20:30.949777 2316 log.go:172] (0xc000950a50) (0xc0007a8140) Stream added, broadcasting: 3\nI0124 00:20:30.951663 2316 log.go:172] (0xc000950a50) Reply frame received for 3\nI0124 00:20:30.951698 2316 log.go:172] (0xc000950a50) (0xc0007a81e0) Create stream\nI0124 00:20:30.951705 2316 
log.go:172] (0xc000950a50) (0xc0007a81e0) Stream added, broadcasting: 5\nI0124 00:20:30.954262 2316 log.go:172] (0xc000950a50) Reply frame received for 5\nI0124 00:20:31.034893 2316 log.go:172] (0xc000950a50) Data frame received for 5\nI0124 00:20:31.035007 2316 log.go:172] (0xc0007a81e0) (5) Data frame handling\nI0124 00:20:31.035032 2316 log.go:172] (0xc0007a81e0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0124 00:20:31.040847 2316 log.go:172] (0xc000950a50) Data frame received for 5\nI0124 00:20:31.040872 2316 log.go:172] (0xc0007a81e0) (5) Data frame handling\nI0124 00:20:31.040877 2316 log.go:172] (0xc0007a81e0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0124 00:20:31.142122 2316 log.go:172] (0xc000950a50) Data frame received for 1\nI0124 00:20:31.142244 2316 log.go:172] (0xc000950a50) (0xc0007a8140) Stream removed, broadcasting: 3\nI0124 00:20:31.142293 2316 log.go:172] (0xc00060de00) (1) Data frame handling\nI0124 00:20:31.142319 2316 log.go:172] (0xc00060de00) (1) Data frame sent\nI0124 00:20:31.142342 2316 log.go:172] (0xc000950a50) (0xc0007a81e0) Stream removed, broadcasting: 5\nI0124 00:20:31.142385 2316 log.go:172] (0xc000950a50) (0xc00060de00) Stream removed, broadcasting: 1\nI0124 00:20:31.142411 2316 log.go:172] (0xc000950a50) Go away received\nI0124 00:20:31.143095 2316 log.go:172] (0xc000950a50) (0xc00060de00) Stream removed, broadcasting: 1\nI0124 00:20:31.143111 2316 log.go:172] (0xc000950a50) (0xc0007a8140) Stream removed, broadcasting: 3\nI0124 00:20:31.143118 2316 log.go:172] (0xc000950a50) (0xc0007a81e0) Stream removed, broadcasting: 5\n" Jan 24 00:20:31.149: INFO: stdout: "" Jan 24 00:20:31.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-208 execpodpr9v6 -- /bin/sh -x -c nc -zv -t -w 2 10.96.97.242 80' Jan 24 00:20:31.487: INFO: stderr: "I0124 00:20:31.301401 2333 log.go:172] (0xc0004878c0) (0xc000a62000) Create stream\nI0124 00:20:31.301462 2333 log.go:172] (0xc0004878c0) (0xc000a62000) Stream added, broadcasting: 1\nI0124 00:20:31.304851 2333 log.go:172] (0xc0004878c0) Reply frame received for 1\nI0124 00:20:31.304874 2333 log.go:172] (0xc0004878c0) (0xc000ab00a0) Create stream\nI0124 00:20:31.304882 2333 log.go:172] (0xc0004878c0) (0xc000ab00a0) Stream added, broadcasting: 3\nI0124 00:20:31.305942 2333 log.go:172] (0xc0004878c0) Reply frame received for 3\nI0124 00:20:31.305958 2333 log.go:172] (0xc0004878c0) (0xc000ab0140) Create stream\nI0124 00:20:31.305965 2333 log.go:172] (0xc0004878c0) (0xc000ab0140) Stream added, broadcasting: 5\nI0124 00:20:31.308608 2333 log.go:172] (0xc0004878c0) Reply frame received for 5\nI0124 00:20:31.391799 2333 log.go:172] (0xc0004878c0) Data frame received for 5\nI0124 00:20:31.391864 2333 log.go:172] (0xc000ab0140) (5) Data frame handling\nI0124 00:20:31.391887 2333 log.go:172] (0xc000ab0140) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.97.242 80\nI0124 00:20:31.395443 2333 log.go:172] (0xc0004878c0) Data frame received for 5\nI0124 00:20:31.395472 2333 log.go:172] (0xc000ab0140) (5) Data frame handling\nI0124 00:20:31.395500 2333 log.go:172] (0xc000ab0140) (5) Data frame sent\nConnection to 10.96.97.242 80 port [tcp/http] succeeded!\nI0124 00:20:31.471401 2333 log.go:172] (0xc0004878c0) Data frame received for 1\nI0124 00:20:31.471485 2333 log.go:172] (0xc0004878c0) (0xc000ab0140) Stream removed, broadcasting: 5\nI0124 00:20:31.471537 2333 log.go:172] (0xc000a62000) (1) Data frame handling\nI0124 00:20:31.471552 
2333 log.go:172] (0xc000a62000) (1) Data frame sent\nI0124 00:20:31.471592 2333 log.go:172] (0xc0004878c0) (0xc000ab00a0) Stream removed, broadcasting: 3\nI0124 00:20:31.471703 2333 log.go:172] (0xc0004878c0) (0xc000a62000) Stream removed, broadcasting: 1\nI0124 00:20:31.472383 2333 log.go:172] (0xc0004878c0) (0xc000a62000) Stream removed, broadcasting: 1\nI0124 00:20:31.472398 2333 log.go:172] (0xc0004878c0) (0xc000ab00a0) Stream removed, broadcasting: 3\nI0124 00:20:31.472408 2333 log.go:172] (0xc0004878c0) (0xc000ab0140) Stream removed, broadcasting: 5\n" Jan 24 00:20:31.487: INFO: stdout: "" Jan 24 00:20:31.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-208 execpodpr9v6 -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 32256' Jan 24 00:20:31.748: INFO: stderr: "I0124 00:20:31.616713 2353 log.go:172] (0xc000a72b00) (0xc0006a9f40) Create stream\nI0124 00:20:31.616799 2353 log.go:172] (0xc000a72b00) (0xc0006a9f40) Stream added, broadcasting: 1\nI0124 00:20:31.619060 2353 log.go:172] (0xc000a72b00) Reply frame received for 1\nI0124 00:20:31.619079 2353 log.go:172] (0xc000a72b00) (0xc000a2a0a0) Create stream\nI0124 00:20:31.619084 2353 log.go:172] (0xc000a72b00) (0xc000a2a0a0) Stream added, broadcasting: 3\nI0124 00:20:31.620159 2353 log.go:172] (0xc000a72b00) Reply frame received for 3\nI0124 00:20:31.620212 2353 log.go:172] (0xc000a72b00) (0xc000a5c0a0) Create stream\nI0124 00:20:31.620220 2353 log.go:172] (0xc000a72b00) (0xc000a5c0a0) Stream added, broadcasting: 5\nI0124 00:20:31.622097 2353 log.go:172] (0xc000a72b00) Reply frame received for 5\nI0124 00:20:31.683971 2353 log.go:172] (0xc000a72b00) Data frame received for 5\nI0124 00:20:31.684069 2353 log.go:172] (0xc000a5c0a0) (5) Data frame handling\nI0124 00:20:31.684088 2353 log.go:172] (0xc000a5c0a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 32256\nI0124 00:20:31.685678 2353 log.go:172] (0xc000a72b00) Data frame received for 5\nI0124 00:20:31.685693 2353 log.go:172] (0xc000a5c0a0) (5) Data frame handling\nI0124 00:20:31.685701 2353 log.go:172] (0xc000a5c0a0) (5) Data frame sent\nConnection to 10.96.2.250 32256 port [tcp/32256] succeeded!\nI0124 00:20:31.743070 2353 log.go:172] (0xc000a72b00) Data frame received for 1\nI0124 00:20:31.743220 2353 log.go:172] (0xc000a72b00) (0xc000a2a0a0) Stream removed, broadcasting: 3\nI0124 00:20:31.743310 2353 log.go:172] (0xc0006a9f40) (1) Data frame handling\nI0124 00:20:31.743374 2353 log.go:172] (0xc0006a9f40) (1) Data frame sent\nI0124 00:20:31.743414 2353 log.go:172] (0xc000a72b00) (0xc000a5c0a0) Stream removed, broadcasting: 5\nI0124 00:20:31.743453 2353 log.go:172] (0xc000a72b00) (0xc0006a9f40) Stream removed, broadcasting: 1\nI0124 00:20:31.743465 2353 log.go:172] (0xc000a72b00) Go away received\nI0124 00:20:31.744095 2353 log.go:172] (0xc000a72b00) (0xc0006a9f40) Stream removed, broadcasting: 1\nI0124 00:20:31.744113 2353 log.go:172] (0xc000a72b00) (0xc000a2a0a0) Stream removed, broadcasting: 3\nI0124 00:20:31.744122 2353 log.go:172] (0xc000a72b00) (0xc000a5c0a0) Stream removed, broadcasting: 5\n" Jan 24 00:20:31.748: INFO: stdout: "" Jan 24 00:20:31.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-208 execpodpr9v6 -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 32256' Jan 24 00:20:32.068: INFO: stderr: "I0124 00:20:31.919837 2371 log.go:172] (0xc000970000) (0xc000c2e1e0) Create stream\nI0124 00:20:31.920094 2371 log.go:172] (0xc000970000) (0xc000c2e1e0) Stream added, broadcasting: 
1\nI0124 00:20:31.925476 2371 log.go:172] (0xc000970000) Reply frame received for 1\nI0124 00:20:31.925533 2371 log.go:172] (0xc000970000) (0xc000c2e280) Create stream\nI0124 00:20:31.925561 2371 log.go:172] (0xc000970000) (0xc000c2e280) Stream added, broadcasting: 3\nI0124 00:20:31.926795 2371 log.go:172] (0xc000970000) Reply frame received for 3\nI0124 00:20:31.926832 2371 log.go:172] (0xc000970000) (0xc0003fdcc0) Create stream\nI0124 00:20:31.926842 2371 log.go:172] (0xc000970000) (0xc0003fdcc0) Stream added, broadcasting: 5\nI0124 00:20:31.928082 2371 log.go:172] (0xc000970000) Reply frame received for 5\nI0124 00:20:31.994539 2371 log.go:172] (0xc000970000) Data frame received for 5\nI0124 00:20:31.994605 2371 log.go:172] (0xc0003fdcc0) (5) Data frame handling\nI0124 00:20:31.994627 2371 log.go:172] (0xc0003fdcc0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 32256\nI0124 00:20:31.996233 2371 log.go:172] (0xc000970000) Data frame received for 5\nI0124 00:20:31.996275 2371 log.go:172] (0xc0003fdcc0) (5) Data frame handling\nI0124 00:20:31.996299 2371 log.go:172] (0xc0003fdcc0) (5) Data frame sent\nConnection to 10.96.1.234 32256 port [tcp/32256] succeeded!\nI0124 00:20:32.061943 2371 log.go:172] (0xc000970000) Data frame received for 1\nI0124 00:20:32.062060 2371 log.go:172] (0xc000c2e1e0) (1) Data frame handling\nI0124 00:20:32.062098 2371 log.go:172] (0xc000c2e1e0) (1) Data frame sent\nI0124 00:20:32.062173 2371 log.go:172] (0xc000970000) (0xc000c2e1e0) Stream removed, broadcasting: 1\nI0124 00:20:32.063034 2371 log.go:172] (0xc000970000) (0xc000c2e280) Stream removed, broadcasting: 3\nI0124 00:20:32.063103 2371 log.go:172] (0xc000970000) (0xc0003fdcc0) Stream removed, broadcasting: 5\nI0124 00:20:32.063151 2371 log.go:172] (0xc000970000) (0xc000c2e1e0) Stream removed, broadcasting: 1\nI0124 00:20:32.063176 2371 log.go:172] (0xc000970000) (0xc000c2e280) Stream removed, broadcasting: 3\nI0124 00:20:32.063200 2371 log.go:172] (0xc000970000) (0xc0003fdcc0) Stream removed, broadcasting: 5\n" Jan 24 00:20:32.068: INFO: stdout: "" Jan 24 00:20:32.068: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:20:32.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-208" for this suite. 
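The type flip the test performs before its nc probes is a plain Service update: set the type to NodePort, clear externalName, and give the service a port. A sketch assuming client-go v0.18+ and a ready kubernetes.Interface; the port numbers are illustrative.

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// toNodePort flips a Service from ExternalName to NodePort, the mutation the
// test performs before probing the port with `nc -zv` from an exec pod.
func toNodePort(ctx context.Context, cs kubernetes.Interface, ns, name string) (*corev1.Service, error) {
	svc, err := cs.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	svc.Spec.Type = corev1.ServiceTypeNodePort
	svc.Spec.ExternalName = "" // only valid for ExternalName services, so clear it
	svc.Spec.Ports = []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}}
	// After the update the API server allocates a node port, readable from
	// svc.Spec.Ports[0].NodePort (32256 in the run above).
	return cs.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
}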
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 • [SLOW TEST:20.006 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":114,"skipped":1801,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:20:32.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jan 24 00:20:32.288: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-293 /api/v1/namespaces/watch-293/configmaps/e2e-watch-test-configmap-a 7d277309-1a8a-4614-b66a-5e1fa4023aa9 3914462 0 2020-01-24 00:20:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 24 00:20:32.288: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-293 /api/v1/namespaces/watch-293/configmaps/e2e-watch-test-configmap-a 7d277309-1a8a-4614-b66a-5e1fa4023aa9 3914462 0 2020-01-24 00:20:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jan 24 00:20:42.296: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-293 /api/v1/namespaces/watch-293/configmaps/e2e-watch-test-configmap-a 7d277309-1a8a-4614-b66a-5e1fa4023aa9 3914511 0 2020-01-24 00:20:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 24 00:20:42.296: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-293 /api/v1/namespaces/watch-293/configmaps/e2e-watch-test-configmap-a 7d277309-1a8a-4614-b66a-5e1fa4023aa9 3914511 0 2020-01-24 00:20:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jan 24 00:20:52.310: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-293 
/api/v1/namespaces/watch-293/configmaps/e2e-watch-test-configmap-a 7d277309-1a8a-4614-b66a-5e1fa4023aa9 3914535 0 2020-01-24 00:20:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 24 00:20:52.310: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-293 /api/v1/namespaces/watch-293/configmaps/e2e-watch-test-configmap-a 7d277309-1a8a-4614-b66a-5e1fa4023aa9 3914535 0 2020-01-24 00:20:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jan 24 00:21:02.322: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-293 /api/v1/namespaces/watch-293/configmaps/e2e-watch-test-configmap-a 7d277309-1a8a-4614-b66a-5e1fa4023aa9 3914557 0 2020-01-24 00:20:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 24 00:21:02.322: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-293 /api/v1/namespaces/watch-293/configmaps/e2e-watch-test-configmap-a 7d277309-1a8a-4614-b66a-5e1fa4023aa9 3914557 0 2020-01-24 00:20:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jan 24 00:21:12.331: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-293 /api/v1/namespaces/watch-293/configmaps/e2e-watch-test-configmap-b 87b826e2-901e-45ae-877a-44e859fb3e43 3914579 0 2020-01-24 00:21:12 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 24 00:21:12.332: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-293 /api/v1/namespaces/watch-293/configmaps/e2e-watch-test-configmap-b 87b826e2-901e-45ae-877a-44e859fb3e43 3914579 0 2020-01-24 00:21:12 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jan 24 00:21:22.342: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-293 /api/v1/namespaces/watch-293/configmaps/e2e-watch-test-configmap-b 87b826e2-901e-45ae-877a-44e859fb3e43 3914603 0 2020-01-24 00:21:12 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 24 00:21:22.342: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-293 /api/v1/namespaces/watch-293/configmaps/e2e-watch-test-configmap-b 87b826e2-901e-45ae-877a-44e859fb3e43 3914603 0 2020-01-24 00:21:12 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:21:32.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-293" for this suite. 
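Each watcher above is a plain label-selector watch on configmaps: the test registers one for label A, one for label B, and one for A-or-B, then asserts each sees exactly the right ADDED/MODIFIED/DELETED events. A sketch of the label-A watcher with client-go (v0.18+ signatures assumed):

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchLabelA prints ADDED/MODIFIED/DELETED events for configmaps carrying
// label A, mirroring one of the three watchers the test registers.
func watchLabelA(ctx context.Context, cs kubernetes.Interface, ns string) error {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		cm := ev.Object.(*corev1.ConfigMap)
		fmt.Println(ev.Type, cm.Name, cm.Data) // e.g. MODIFIED e2e-watch-test-configmap-a map[mutation:1]
	}
	return nil
}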
• [SLOW TEST:60.159 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":115,"skipped":1830,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:21:32.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 24 00:21:48.587: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 24 00:21:48.816: INFO: Pod pod-with-poststart-exec-hook still exists Jan 24 00:21:50.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 24 00:21:50.825: INFO: Pod pod-with-poststart-exec-hook still exists Jan 24 00:21:52.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 24 00:21:52.823: INFO: Pod pod-with-poststart-exec-hook still exists Jan 24 00:21:54.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 24 00:21:54.823: INFO: Pod pod-with-poststart-exec-hook still exists Jan 24 00:21:56.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 24 00:21:56.824: INFO: Pod pod-with-poststart-exec-hook still exists Jan 24 00:21:58.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 24 00:21:58.824: INFO: Pod pod-with-poststart-exec-hook still exists Jan 24 00:22:00.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 24 00:22:00.823: INFO: Pod pod-with-poststart-exec-hook still exists Jan 24 00:22:02.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 24 00:22:02.825: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:22:02.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4747" for this suite. 
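The poststart hook can be checked promptly because the kubelet does not consider the container started until the PostStart handler returns, and kills the container if the handler fails. A sketch of a container wired the same way; note that recent k8s.io/api releases name the handler type LifecycleHandler, while releases of this log's vintage used corev1.Handler.

package sketch

import corev1 "k8s.io/api/core/v1"

// postStartContainer returns a container with a PostStart exec hook. The hook
// must complete before the container counts as started, which is why the test
// can "check poststart hook" as soon as the pod comes up.
func postStartContainer() corev1.Container {
	return corev1.Container{
		Name:  "pod-with-poststart-exec-hook",
		Image: "busybox",
		Lifecycle: &corev1.Lifecycle{
			PostStart: &corev1.LifecycleHandler{ // corev1.Handler on older API versions
				Exec: &corev1.ExecAction{
					Command: []string{"sh", "-c", "echo poststart > /tmp/hook"},
				},
			},
		},
	}
}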
• [SLOW TEST:30.481 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1843,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:22:02.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 24 00:22:03.673: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 24 00:22:05.689: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422123, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422123, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422123, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422123, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:22:07.695: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422123, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422123, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422123, loc:(*time.Location)(0x7d7cf00)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422123, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:22:09.696: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422123, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422123, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422123, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422123, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 24 00:22:12.793: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Jan 24 00:22:12.823: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:22:12.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5950" for this suite. STEP: Destroying namespace "webhook-5950-markers" for this suite. 
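"Registering the crd webhook via the AdmissionRegistration API" amounts to creating a ValidatingWebhookConfiguration whose rule matches CREATE on customresourcedefinitions, backed by the webhook service deployed above. A sketch of such an object; the configuration name, service name, and path are illustrative, and caBundle must be the PEM CA that signed the webhook server's certificate.

package sketch

import (
	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func strPtr(s string) *string { return &s }

// denyCRDWebhook builds a validating webhook registration that intercepts
// CRD creation, roughly what the test's registration step does.
func denyCRDWebhook(caBundle []byte) *admissionregistrationv1.ValidatingWebhookConfiguration {
	fail := admissionregistrationv1.Fail
	none := admissionregistrationv1.SideEffectClassNone
	return &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-crd-webhook"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-crd.example.com",
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"apiextensions.k8s.io"},
					APIVersions: []string{"v1"},
					Resources:   []string{"customresourcedefinitions"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-5950", // namespace from the log, illustrative
					Name:      "e2e-test-webhook",
					Path:      strPtr("/crd"),
				},
				CABundle: caBundle, // CA that signed the webhook server cert
			},
			FailurePolicy:           &fail,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
}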
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:10.267 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":117,"skipped":1853,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:22:13.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0124 00:22:43.450017 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 24 00:22:43.450: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:22:43.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2389" for this suite. 
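With PropagationPolicy: Orphan, the garbage collector strips the ownerReferences from the Deployment's ReplicaSets instead of cascading the delete, which is what the 30-second wait above confirms. A sketch of such a delete with client-go (v0.18+ signatures assumed):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteOrphaning deletes a Deployment while orphaning its ReplicaSets: the
// garbage collector removes their ownerReferences rather than deleting them.
func deleteOrphaning(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	orphan := metav1.DeletePropagationOrphan
	return cs.AppsV1().Deployments(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &orphan,
	})
}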
• [SLOW TEST:30.357 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":118,"skipped":1879,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:22:43.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 24 00:22:44.409: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 24 00:22:46.423: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422164, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422164, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422164, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422164, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:22:48.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422164, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422164, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422164, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422164, loc:(*time.Location)(0x7d7cf00)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:22:51.273: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422164, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422164, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422164, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422164, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:22:52.433: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422164, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422164, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422164, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422164, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:22:54.460: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422164, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422164, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422164, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422164, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 24 00:22:57.481: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is 
shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:23:09.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1823" for this suite. STEP: Destroying namespace "webhook-1823-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:26.478 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":119,"skipped":1912,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:23:09.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1465 STEP: creating an pod Jan 24 00:23:10.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-8822 -- logs-generator --log-lines-total 100 --run-duration 20s' Jan 24 00:23:10.280: INFO: stderr: "" Jan 24 00:23:10.280: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Waiting for log generator to start. Jan 24 00:23:10.281: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jan 24 00:23:10.281: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8822" to be "running and ready, or succeeded" Jan 24 00:23:10.284: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.240478ms Jan 24 00:23:12.291: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010254434s Jan 24 00:23:14.301: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020019951s Jan 24 00:23:16.308: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02710051s Jan 24 00:23:18.319: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038274075s Jan 24 00:23:20.331: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 10.049801708s Jan 24 00:23:20.331: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jan 24 00:23:20.331: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Jan 24 00:23:20.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8822' Jan 24 00:23:20.502: INFO: stderr: "" Jan 24 00:23:20.502: INFO: stdout: "I0124 00:23:17.197692 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/dglf 324\nI0124 00:23:17.397899 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/9b7t 466\nI0124 00:23:17.598074 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/gx7 598\nI0124 00:23:17.798266 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/j89 514\nI0124 00:23:17.998175 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/q4xf 319\nI0124 00:23:18.197926 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/xcf 431\nI0124 00:23:18.398034 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/blw 433\nI0124 00:23:18.598475 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/kds8 540\nI0124 00:23:18.798276 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/r8r 347\nI0124 00:23:18.998013 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/65ll 318\nI0124 00:23:19.197974 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/vlsb 290\nI0124 00:23:19.398363 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/6lvv 522\nI0124 00:23:19.598343 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/tx7 201\nI0124 00:23:19.798183 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/9cz 475\nI0124 00:23:19.998064 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/5wgg 265\nI0124 00:23:20.198101 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/kj8 409\nI0124 00:23:20.398053 1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/r2pw 371\n" STEP: limiting log lines Jan 24 00:23:20.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8822 --tail=1' Jan 24 00:23:20.610: INFO: stderr: "" Jan 24 00:23:20.610: INFO: stdout: "I0124 00:23:20.597944 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/nxtg 339\n" Jan 24 00:23:20.610: INFO: got output "I0124 00:23:20.597944 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/nxtg 339\n" STEP: limiting log bytes Jan 24 00:23:20.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8822 --limit-bytes=1' Jan 24 00:23:20.735: INFO: stderr: "" Jan 24 00:23:20.735: INFO: stdout: "I" Jan 24 00:23:20.735: INFO: got output "I" STEP: exposing timestamps Jan 24 00:23:20.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator 
--namespace=kubectl-8822 --tail=1 --timestamps' Jan 24 00:23:20.825: INFO: stderr: "" Jan 24 00:23:20.825: INFO: stdout: "2020-01-24T00:23:20.798501692Z I0124 00:23:20.798050 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/8gw 394\n" Jan 24 00:23:20.825: INFO: got output "2020-01-24T00:23:20.798501692Z I0124 00:23:20.798050 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/8gw 394\n" STEP: restricting to a time range Jan 24 00:23:23.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8822 --since=1s' Jan 24 00:23:23.510: INFO: stderr: "" Jan 24 00:23:23.510: INFO: stdout: "I0124 00:23:22.598052 1 logs_generator.go:76] 27 POST /api/v1/namespaces/ns/pods/rcj 432\nI0124 00:23:22.798089 1 logs_generator.go:76] 28 POST /api/v1/namespaces/ns/pods/hgg 575\nI0124 00:23:22.998046 1 logs_generator.go:76] 29 POST /api/v1/namespaces/kube-system/pods/vw9 224\nI0124 00:23:23.198122 1 logs_generator.go:76] 30 GET /api/v1/namespaces/kube-system/pods/9k86 328\nI0124 00:23:23.398563 1 logs_generator.go:76] 31 GET /api/v1/namespaces/default/pods/jvc4 464\n" Jan 24 00:23:23.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8822 --since=24h' Jan 24 00:23:23.648: INFO: stderr: "" Jan 24 00:23:23.648: INFO: stdout: "I0124 00:23:17.197692 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/dglf 324\nI0124 00:23:17.397899 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/9b7t 466\nI0124 00:23:17.598074 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/gx7 598\nI0124 00:23:17.798266 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/j89 514\nI0124 00:23:17.998175 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/q4xf 319\nI0124 00:23:18.197926 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/xcf 431\nI0124 00:23:18.398034 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/blw 433\nI0124 00:23:18.598475 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/kds8 540\nI0124 00:23:18.798276 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/r8r 347\nI0124 00:23:18.998013 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/65ll 318\nI0124 00:23:19.197974 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/vlsb 290\nI0124 00:23:19.398363 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/6lvv 522\nI0124 00:23:19.598343 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/tx7 201\nI0124 00:23:19.798183 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/9cz 475\nI0124 00:23:19.998064 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/5wgg 265\nI0124 00:23:20.198101 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/kj8 409\nI0124 00:23:20.398053 1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/r2pw 371\nI0124 00:23:20.597944 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/nxtg 339\nI0124 00:23:20.798050 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/8gw 394\nI0124 00:23:20.997914 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/lgrr 403\nI0124 00:23:21.197911 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/dcf 352\nI0124 00:23:21.398079 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/4p26 496\nI0124 00:23:21.598126 1 logs_generator.go:76] 22 POST 
/api/v1/namespaces/kube-system/pods/wqq 597\nI0124 00:23:21.798108 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/qdxr 291\nI0124 00:23:21.998115 1 logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/87p 217\nI0124 00:23:22.197879 1 logs_generator.go:76] 25 POST /api/v1/namespaces/default/pods/kgt 454\nI0124 00:23:22.398304 1 logs_generator.go:76] 26 GET /api/v1/namespaces/default/pods/v4r 318\nI0124 00:23:22.598052 1 logs_generator.go:76] 27 POST /api/v1/namespaces/ns/pods/rcj 432\nI0124 00:23:22.798089 1 logs_generator.go:76] 28 POST /api/v1/namespaces/ns/pods/hgg 575\nI0124 00:23:22.998046 1 logs_generator.go:76] 29 POST /api/v1/namespaces/kube-system/pods/vw9 224\nI0124 00:23:23.198122 1 logs_generator.go:76] 30 GET /api/v1/namespaces/kube-system/pods/9k86 328\nI0124 00:23:23.398563 1 logs_generator.go:76] 31 GET /api/v1/namespaces/default/pods/jvc4 464\nI0124 00:23:23.598283 1 logs_generator.go:76] 32 GET /api/v1/namespaces/ns/pods/qp5d 279\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1471 Jan 24 00:23:23.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-8822' Jan 24 00:23:28.802: INFO: stderr: "" Jan 24 00:23:28.802: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:23:28.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8822" for this suite. • [SLOW TEST:18.938 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":120,"skipped":1929,"failed":0} SS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:23:28.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:687 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating service endpoint-test2 in namespace services-3442 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3442 to expose endpoints map[] Jan 24 00:23:28.970: INFO: successfully validated that service endpoint-test2 in namespace services-3442 exposes endpoints map[] (7.617676ms elapsed) STEP: Creating pod pod1 in namespace services-3442 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3442 to 
expose endpoints map[pod1:[80]] Jan 24 00:23:33.094: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.072207712s elapsed, will retry) Jan 24 00:23:37.152: INFO: successfully validated that service endpoint-test2 in namespace services-3442 exposes endpoints map[pod1:[80]] (8.129833713s elapsed) STEP: Creating pod pod2 in namespace services-3442 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3442 to expose endpoints map[pod1:[80] pod2:[80]] Jan 24 00:23:41.563: INFO: Unexpected endpoints: found map[81c37213-1736-4362-afa7-ce344d0ff300:[80]], expected map[pod1:[80] pod2:[80]] (4.401802585s elapsed, will retry) Jan 24 00:23:43.677: INFO: successfully validated that service endpoint-test2 in namespace services-3442 exposes endpoints map[pod1:[80] pod2:[80]] (6.516129724s elapsed) STEP: Deleting pod pod1 in namespace services-3442 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3442 to expose endpoints map[pod2:[80]] Jan 24 00:23:43.786: INFO: successfully validated that service endpoint-test2 in namespace services-3442 exposes endpoints map[pod2:[80]] (82.138732ms elapsed) STEP: Deleting pod pod2 in namespace services-3442 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3442 to expose endpoints map[] Jan 24 00:23:44.831: INFO: successfully validated that service endpoint-test2 in namespace services-3442 exposes endpoints map[] (1.027807043s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:23:44.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3442" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 • [SLOW TEST:16.063 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":121,"skipped":1931,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:23:44.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-a19531f1-64cf-4e85-a623-80414dcb45a4 STEP: Creating a pod to test consume configMaps Jan 24 00:23:45.079: INFO: Waiting up to 5m0s for pod "pod-configmaps-5f2842ea-0b83-4f84-9ef1-5ddbab4b5bfd" in namespace "configmap-6278" to be "success or failure" Jan 24 00:23:45.091: INFO: Pod 
"pod-configmaps-5f2842ea-0b83-4f84-9ef1-5ddbab4b5bfd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.201046ms Jan 24 00:23:47.421: INFO: Pod "pod-configmaps-5f2842ea-0b83-4f84-9ef1-5ddbab4b5bfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.342295781s Jan 24 00:23:49.969: INFO: Pod "pod-configmaps-5f2842ea-0b83-4f84-9ef1-5ddbab4b5bfd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.889973081s Jan 24 00:23:51.984: INFO: Pod "pod-configmaps-5f2842ea-0b83-4f84-9ef1-5ddbab4b5bfd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.905600388s Jan 24 00:23:53.990: INFO: Pod "pod-configmaps-5f2842ea-0b83-4f84-9ef1-5ddbab4b5bfd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.911112168s Jan 24 00:23:55.993: INFO: Pod "pod-configmaps-5f2842ea-0b83-4f84-9ef1-5ddbab4b5bfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.914355057s STEP: Saw pod success Jan 24 00:23:55.993: INFO: Pod "pod-configmaps-5f2842ea-0b83-4f84-9ef1-5ddbab4b5bfd" satisfied condition "success or failure" Jan 24 00:23:55.995: INFO: Trying to get logs from node jerma-node pod pod-configmaps-5f2842ea-0b83-4f84-9ef1-5ddbab4b5bfd container configmap-volume-test: STEP: delete the pod Jan 24 00:23:56.081: INFO: Waiting for pod pod-configmaps-5f2842ea-0b83-4f84-9ef1-5ddbab4b5bfd to disappear Jan 24 00:23:56.086: INFO: Pod pod-configmaps-5f2842ea-0b83-4f84-9ef1-5ddbab4b5bfd no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:23:56.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6278" for this suite. • [SLOW TEST:11.161 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":1934,"failed":0} SSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:23:56.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 24 00:23:56.173: INFO: Creating ReplicaSet my-hostname-basic-7522ec9a-0ccc-4281-bd14-d4f8d083ffe7 Jan 24 00:23:56.239: INFO: Pod name my-hostname-basic-7522ec9a-0ccc-4281-bd14-d4f8d083ffe7: Found 0 pods out of 1 Jan 24 00:24:01.264: INFO: Pod name my-hostname-basic-7522ec9a-0ccc-4281-bd14-d4f8d083ffe7: Found 1 pods out of 1 Jan 24 00:24:01.264: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-7522ec9a-0ccc-4281-bd14-d4f8d083ffe7" is running Jan 24 00:24:05.290: INFO: Pod 
"my-hostname-basic-7522ec9a-0ccc-4281-bd14-d4f8d083ffe7-4qvjh" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-24 00:23:56 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-24 00:23:56 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-7522ec9a-0ccc-4281-bd14-d4f8d083ffe7]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-24 00:23:56 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-7522ec9a-0ccc-4281-bd14-d4f8d083ffe7]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-24 00:23:56 +0000 UTC Reason: Message:}]) Jan 24 00:24:05.291: INFO: Trying to dial the pod Jan 24 00:24:10.338: INFO: Controller my-hostname-basic-7522ec9a-0ccc-4281-bd14-d4f8d083ffe7: Got expected result from replica 1 [my-hostname-basic-7522ec9a-0ccc-4281-bd14-d4f8d083ffe7-4qvjh]: "my-hostname-basic-7522ec9a-0ccc-4281-bd14-d4f8d083ffe7-4qvjh", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:24:10.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2221" for this suite. • [SLOW TEST:14.244 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":123,"skipped":1938,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:24:10.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:331 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the initial replication controller Jan 24 00:24:10.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5260' Jan 24 00:24:10.963: INFO: stderr: "" Jan 24 00:24:10.963: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jan 24 00:24:10.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5260' Jan 24 00:24:11.137: INFO: stderr: "" Jan 24 00:24:11.137: INFO: stdout: "update-demo-nautilus-6sl5k update-demo-nautilus-gz8l7 " Jan 24 00:24:11.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6sl5k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5260' Jan 24 00:24:11.253: INFO: stderr: "" Jan 24 00:24:11.253: INFO: stdout: "" Jan 24 00:24:11.253: INFO: update-demo-nautilus-6sl5k is created but not running Jan 24 00:24:16.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5260' Jan 24 00:24:16.435: INFO: stderr: "" Jan 24 00:24:16.435: INFO: stdout: "update-demo-nautilus-6sl5k update-demo-nautilus-gz8l7 " Jan 24 00:24:16.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6sl5k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5260' Jan 24 00:24:16.565: INFO: stderr: "" Jan 24 00:24:16.565: INFO: stdout: "" Jan 24 00:24:16.565: INFO: update-demo-nautilus-6sl5k is created but not running Jan 24 00:24:21.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5260' Jan 24 00:24:21.745: INFO: stderr: "" Jan 24 00:24:21.745: INFO: stdout: "update-demo-nautilus-6sl5k update-demo-nautilus-gz8l7 " Jan 24 00:24:21.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6sl5k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5260' Jan 24 00:24:21.847: INFO: stderr: "" Jan 24 00:24:21.847: INFO: stdout: "" Jan 24 00:24:21.847: INFO: update-demo-nautilus-6sl5k is created but not running Jan 24 00:24:26.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5260' Jan 24 00:24:26.982: INFO: stderr: "" Jan 24 00:24:26.982: INFO: stdout: "update-demo-nautilus-6sl5k update-demo-nautilus-gz8l7 " Jan 24 00:24:26.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6sl5k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5260' Jan 24 00:24:27.090: INFO: stderr: "" Jan 24 00:24:27.090: INFO: stdout: "true" Jan 24 00:24:27.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6sl5k -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5260' Jan 24 00:24:27.186: INFO: stderr: "" Jan 24 00:24:27.186: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 24 00:24:27.186: INFO: validating pod update-demo-nautilus-6sl5k Jan 24 00:24:27.196: INFO: got data: { "image": "nautilus.jpg" } Jan 24 00:24:27.196: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 24 00:24:27.196: INFO: update-demo-nautilus-6sl5k is verified up and running Jan 24 00:24:27.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gz8l7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5260' Jan 24 00:24:27.292: INFO: stderr: "" Jan 24 00:24:27.292: INFO: stdout: "true" Jan 24 00:24:27.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gz8l7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5260' Jan 24 00:24:27.372: INFO: stderr: "" Jan 24 00:24:27.372: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 24 00:24:27.372: INFO: validating pod update-demo-nautilus-gz8l7 Jan 24 00:24:27.376: INFO: got data: { "image": "nautilus.jpg" } Jan 24 00:24:27.376: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 24 00:24:27.376: INFO: update-demo-nautilus-gz8l7 is verified up and running STEP: rolling-update to new replication controller Jan 24 00:24:27.378: INFO: scanned /root for discovery docs: Jan 24 00:24:27.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-5260' Jan 24 00:24:58.763: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 24 00:24:58.764: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jan 24 00:24:58.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5260' Jan 24 00:24:58.982: INFO: stderr: "" Jan 24 00:24:58.982: INFO: stdout: "update-demo-kitten-ks6j9 update-demo-kitten-x46z5 update-demo-nautilus-gz8l7 " STEP: Replicas for name=update-demo: expected=2 actual=3 Jan 24 00:25:03.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5260' Jan 24 00:25:04.064: INFO: stderr: "" Jan 24 00:25:04.064: INFO: stdout: "update-demo-kitten-ks6j9 update-demo-kitten-x46z5 " Jan 24 00:25:04.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ks6j9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5260' Jan 24 00:25:04.147: INFO: stderr: "" Jan 24 00:25:04.147: INFO: stdout: "true" Jan 24 00:25:04.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ks6j9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5260' Jan 24 00:25:04.229: INFO: stderr: "" Jan 24 00:25:04.229: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 24 00:25:04.229: INFO: validating pod update-demo-kitten-ks6j9 Jan 24 00:25:04.249: INFO: got data: { "image": "kitten.jpg" } Jan 24 00:25:04.249: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 24 00:25:04.250: INFO: update-demo-kitten-ks6j9 is verified up and running Jan 24 00:25:04.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-x46z5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5260' Jan 24 00:25:04.364: INFO: stderr: "" Jan 24 00:25:04.364: INFO: stdout: "true" Jan 24 00:25:04.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-x46z5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5260' Jan 24 00:25:04.474: INFO: stderr: "" Jan 24 00:25:04.474: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 24 00:25:04.474: INFO: validating pod update-demo-kitten-x46z5 Jan 24 00:25:04.490: INFO: got data: { "image": "kitten.jpg" } Jan 24 00:25:04.490: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 24 00:25:04.490: INFO: update-demo-kitten-x46z5 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:25:04.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5260" for this suite. 
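The rolling update above uses the client-side `kubectl rolling-update` command, which the stderr line already flags as deprecated in favor of `kubectl rollout`. It works by scaling the replacement ReplicationController up one replica at a time while scaling the old one down, exactly as the "Scaling update-demo-kitten up to 1 / Scaling update-demo-nautilus down to 1" output shows. A simplified sketch of that loop with client-go (readiness waits between steps, which kubectl performs, are omitted; function names are illustrative):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // scaleRC sets the replica count on a ReplicationController.
    func scaleRC(ctx context.Context, c kubernetes.Interface, ns, name string, n int32) error {
        rc, err := c.CoreV1().ReplicationControllers(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        rc.Spec.Replicas = &n
        _, err = c.CoreV1().ReplicationControllers(ns).Update(ctx, rc, metav1.UpdateOptions{})
        return err
    }

    // rollingUpdate mimics the deprecated client-side algorithm: step the new
    // controller up and the old one down, then delete the old controller.
    func rollingUpdate(ctx context.Context, c kubernetes.Interface, ns, oldRC, newRC string, replicas int32) error {
        for i := int32(1); i <= replicas; i++ {
            if err := scaleRC(ctx, c, ns, newRC, i); err != nil {
                return err
            }
            if err := scaleRC(ctx, c, ns, oldRC, replicas-i); err != nil {
                return err
            }
            fmt.Printf("scaled %s up to %d, %s down to %d\n", newRC, i, oldRC, replicas-i)
        }
        return c.CoreV1().ReplicationControllers(ns).Delete(ctx, oldRC, metav1.DeleteOptions{})
    }

Because the whole dance runs in the client, an interrupted rolling-update leaves both controllers behind, which is one reason server-side Deployments replaced it.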
• [SLOW TEST:54.182 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":124,"skipped":1948,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:25:04.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 24 00:25:04.619: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 24 00:25:07.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1417 create -f -' Jan 24 00:25:10.802: INFO: stderr: "" Jan 24 00:25:10.802: INFO: stdout: "e2e-test-crd-publish-openapi-928-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 24 00:25:10.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1417 delete e2e-test-crd-publish-openapi-928-crds test-cr' Jan 24 00:25:12.811: INFO: stderr: "" Jan 24 00:25:12.811: INFO: stdout: "e2e-test-crd-publish-openapi-928-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jan 24 00:25:12.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1417 apply -f -' Jan 24 00:25:13.644: INFO: stderr: "" Jan 24 00:25:13.645: INFO: stdout: "e2e-test-crd-publish-openapi-928-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 24 00:25:13.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1417 delete e2e-test-crd-publish-openapi-928-crds test-cr' Jan 24 00:25:14.035: INFO: stderr: "" Jan 24 00:25:14.035: INFO: stdout: "e2e-test-crd-publish-openapi-928-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jan 24 00:25:14.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-928-crds' Jan 24 00:25:14.586: INFO: stderr: "" Jan 24 00:25:14.586: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-928-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:25:18.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1417" for this suite. • [SLOW TEST:13.866 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":125,"skipped":1974,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:25:18.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-map-db7af0c2-09a9-45d4-bec1-13d8ad035933 STEP: Creating a pod to test consume configMaps Jan 24 00:25:18.529: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9d7e2572-8399-4775-ba0b-e4c824e512a4" in namespace "projected-3968" to be "success or failure" Jan 24 00:25:18.533: INFO: Pod "pod-projected-configmaps-9d7e2572-8399-4775-ba0b-e4c824e512a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159299ms Jan 24 00:25:20.541: INFO: Pod "pod-projected-configmaps-9d7e2572-8399-4775-ba0b-e4c824e512a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011424737s Jan 24 00:25:22.547: INFO: Pod "pod-projected-configmaps-9d7e2572-8399-4775-ba0b-e4c824e512a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017805559s Jan 24 00:25:24.554: INFO: Pod "pod-projected-configmaps-9d7e2572-8399-4775-ba0b-e4c824e512a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02452572s Jan 24 00:25:26.585: INFO: Pod "pod-projected-configmaps-9d7e2572-8399-4775-ba0b-e4c824e512a4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.055518645s STEP: Saw pod success Jan 24 00:25:26.585: INFO: Pod "pod-projected-configmaps-9d7e2572-8399-4775-ba0b-e4c824e512a4" satisfied condition "success or failure" Jan 24 00:25:26.590: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-9d7e2572-8399-4775-ba0b-e4c824e512a4 container projected-configmap-volume-test: STEP: delete the pod Jan 24 00:25:26.844: INFO: Waiting for pod pod-projected-configmaps-9d7e2572-8399-4775-ba0b-e4c824e512a4 to disappear Jan 24 00:25:26.865: INFO: Pod pod-projected-configmaps-9d7e2572-8399-4775-ba0b-e4c824e512a4 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:25:26.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3968" for this suite. • [SLOW TEST:8.478 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":1994,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:25:26.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: validating cluster-info Jan 24 00:25:26.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jan 24 00:25:27.121: INFO: stderr: "" Jan 24 00:25:27.121: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:25:27.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6646" for this suite. 
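The cluster-info check above only verifies that `kubectl cluster-info` prints the master URL from the kubeconfig plus any kube-system services labeled kubernetes.io/cluster-service (KubeDNS here). The same facts are reachable programmatically; a small sketch using client-go's discovery client (an assumption about how you might reproduce it, not the test's own code):

    package main

    import (
        "fmt"

        "k8s.io/client-go/discovery"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        // cfg.Host is the "Kubernetes master" URL cluster-info prints
        // (https://172.24.4.193:6443 in the run above).
        fmt.Println("master:", cfg.Host)

        dc, err := discovery.NewDiscoveryClientForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ver, err := dc.ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("server version:", ver.GitVersion) // v1.17.0 in this run
    }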
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":127,"skipped":2042,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:25:27.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 24 00:25:27.917: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 24 00:25:29.940: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422328, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422328, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422328, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422327, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:25:31.991: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422328, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422328, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422328, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422327, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:25:33.946: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422328, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422328, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422328, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422327, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:25:35.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422328, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422328, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422328, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422327, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 24 00:25:39.021: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:25:39.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9192" for this suite. STEP: Destroying namespace "webhook-9192-markers" for this suite. 
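The test above registers webhooks that try to intercept ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects and then demonstrates they have no effect: the API server deliberately exempts webhook configuration objects from admission webhooks, so a misbehaving webhook can always be deleted. For reference, a sketch of the kind of deny-all configuration the test registers, built from the admissionregistration/v1 types (the name, service reference, and path are illustrative):

    package main

    import (
        admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // denyAllWebhookConfig would reject every operation on webhook
    // configuration objects if it were honored. The API server never calls
    // webhooks for *WebhookConfiguration resources, which is the behavior
    // the test above verifies.
    func denyAllWebhookConfig(caBundle []byte) *admissionregistrationv1.ValidatingWebhookConfiguration {
        fail := admissionregistrationv1.Fail
        none := admissionregistrationv1.SideEffectClassNone
        path := "/always-deny" // illustrative endpoint on the webhook pod
        return &admissionregistrationv1.ValidatingWebhookConfiguration{
            ObjectMeta: metav1.ObjectMeta{Name: "deny-webhook-configuration-deletions"},
            Webhooks: []admissionregistrationv1.ValidatingWebhook{{
                Name:                    "deny-webhook-configuration-deletions.example.com",
                FailurePolicy:           &fail,
                SideEffects:             &none,
                AdmissionReviewVersions: []string{"v1"},
                ClientConfig: admissionregistrationv1.WebhookClientConfig{
                    Service: &admissionregistrationv1.ServiceReference{
                        Namespace: "webhook-9192",
                        Name:      "e2e-test-webhook",
                        Path:      &path,
                    },
                    CABundle: caBundle,
                },
                Rules: []admissionregistrationv1.RuleWithOperations{{
                    Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.OperationAll},
                    Rule: admissionregistrationv1.Rule{
                        APIGroups:   []string{"admissionregistration.k8s.io"},
                        APIVersions: []string{"*"},
                        Resources:   []string{"validatingwebhookconfigurations", "mutatingwebhookconfigurations"},
                    },
                }},
            }},
        }
    }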
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:12.256 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":128,"skipped":2046,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:25:39.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:25:49.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1023" for this suite. 
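The emptydir-wrapper test above mounts a Secret volume and a ConfigMap volume in a single pod; the kubelet materializes both on top of an emptyDir "wrapper" volume, and the test checks that the two wrappers do not conflict. A minimal pod spec of that shape (a sketch; object names and the image are illustrative):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // wrapperVolumesPod mounts a Secret volume and a ConfigMap volume side
    // by side, the combination exercised by the "should not conflict" test.
    func wrapperVolumesPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-and-configmaps"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "wrapper-test",
                    Image:   "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
                    Command: []string{"sleep", "3600"},
                    VolumeMounts: []corev1.VolumeMount{
                        {Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true},
                        {Name: "configmap-volume", MountPath: "/etc/configmap-volume", ReadOnly: true},
                    },
                }},
                Volumes: []corev1.Volume{
                    {
                        Name: "secret-volume",
                        VolumeSource: corev1.VolumeSource{
                            Secret: &corev1.SecretVolumeSource{SecretName: "wrapper-secret"},
                        },
                    },
                    {
                        Name: "configmap-volume",
                        VolumeSource: corev1.VolumeSource{
                            ConfigMap: &corev1.ConfigMapVolumeSource{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "wrapper-configmap"},
                            },
                        },
                    },
                },
            },
        }
    }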
• [SLOW TEST:10.247 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":129,"skipped":2055,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:25:49.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 24 00:26:21.823: INFO: Container started at 2020-01-24 00:25:58 +0000 UTC, pod became ready at 2020-01-24 00:26:19 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:26:21.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-443" for this suite. 
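The readiness-probe test above confirms two properties: the pod must not be reported Ready before the probe's initial delay has elapsed (container started at 00:25:58, Ready at 00:26:19), and readiness failures never restart the container; only liveness probes trigger restarts. A pod spec of the shape being tested (a sketch with illustrative probe values; the embedded handler struct is named ProbeHandler in recent client-go and Handler in the release this log came from):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // readinessPod carries a readiness probe with a deliberate initial
    // delay: the kubelet gates the Ready condition on this probe but never
    // restarts the container because of it.
    func readinessPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "test-webserver",
                    Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
                    Args:  []string{"test-webserver"},
                    ReadinessProbe: &corev1.Probe{
                        InitialDelaySeconds: 10, // pod cannot be Ready before this elapses
                        PeriodSeconds:       5,
                        ProbeHandler: corev1.ProbeHandler{
                            HTTPGet: &corev1.HTTPGetAction{
                                Path: "/",
                                Port: intstr.FromInt(80),
                            },
                        },
                    },
                }},
            },
        }
    }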
• [SLOW TEST:32.226 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2081,"failed":0} [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:26:21.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-7912 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 24 00:26:21.971: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 24 00:27:00.237: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=http&host=10.44.0.2&port=8080&tries=1'] Namespace:pod-network-test-7912 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 24 00:27:00.237: INFO: >>> kubeConfig: /root/.kube/config I0124 00:27:00.323164 8 log.go:172] (0xc0052ea4d0) (0xc002215900) Create stream I0124 00:27:00.323253 8 log.go:172] (0xc0052ea4d0) (0xc002215900) Stream added, broadcasting: 1 I0124 00:27:00.327285 8 log.go:172] (0xc0052ea4d0) Reply frame received for 1 I0124 00:27:00.327382 8 log.go:172] (0xc0052ea4d0) (0xc0021d60a0) Create stream I0124 00:27:00.327399 8 log.go:172] (0xc0052ea4d0) (0xc0021d60a0) Stream added, broadcasting: 3 I0124 00:27:00.329243 8 log.go:172] (0xc0052ea4d0) Reply frame received for 3 I0124 00:27:00.329288 8 log.go:172] (0xc0052ea4d0) (0xc001aa5ae0) Create stream I0124 00:27:00.329307 8 log.go:172] (0xc0052ea4d0) (0xc001aa5ae0) Stream added, broadcasting: 5 I0124 00:27:00.332537 8 log.go:172] (0xc0052ea4d0) Reply frame received for 5 I0124 00:27:00.464079 8 log.go:172] (0xc0052ea4d0) Data frame received for 3 I0124 00:27:00.464209 8 log.go:172] (0xc0021d60a0) (3) Data frame handling I0124 00:27:00.464252 8 log.go:172] (0xc0021d60a0) (3) Data frame sent I0124 00:27:00.593971 8 log.go:172] (0xc0052ea4d0) (0xc001aa5ae0) Stream removed, broadcasting: 5 I0124 00:27:00.594065 8 log.go:172] (0xc0052ea4d0) Data frame received for 1 I0124 00:27:00.594121 8 log.go:172] (0xc0052ea4d0) (0xc0021d60a0) Stream removed, broadcasting: 3 I0124 00:27:00.594152 8 log.go:172] (0xc002215900) (1) Data frame handling I0124 00:27:00.594177 8 log.go:172] (0xc002215900) (1) Data frame sent I0124 00:27:00.594189 8 log.go:172] (0xc0052ea4d0) (0xc002215900) Stream removed, broadcasting: 1 I0124 
00:27:00.594201 8 log.go:172] (0xc0052ea4d0) Go away received I0124 00:27:00.594675 8 log.go:172] (0xc0052ea4d0) (0xc002215900) Stream removed, broadcasting: 1 I0124 00:27:00.594693 8 log.go:172] (0xc0052ea4d0) (0xc0021d60a0) Stream removed, broadcasting: 3 I0124 00:27:00.594702 8 log.go:172] (0xc0052ea4d0) (0xc001aa5ae0) Stream removed, broadcasting: 5 Jan 24 00:27:00.594: INFO: Waiting for responses: map[] Jan 24 00:27:00.600: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-7912 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 24 00:27:00.600: INFO: >>> kubeConfig: /root/.kube/config I0124 00:27:00.660030 8 log.go:172] (0xc0052eabb0) (0xc002215e00) Create stream I0124 00:27:00.660197 8 log.go:172] (0xc0052eabb0) (0xc002215e00) Stream added, broadcasting: 1 I0124 00:27:00.663737 8 log.go:172] (0xc0052eabb0) Reply frame received for 1 I0124 00:27:00.663765 8 log.go:172] (0xc0052eabb0) (0xc0021d6140) Create stream I0124 00:27:00.663771 8 log.go:172] (0xc0052eabb0) (0xc0021d6140) Stream added, broadcasting: 3 I0124 00:27:00.665241 8 log.go:172] (0xc0052eabb0) Reply frame received for 3 I0124 00:27:00.665268 8 log.go:172] (0xc0052eabb0) (0xc0027572c0) Create stream I0124 00:27:00.665276 8 log.go:172] (0xc0052eabb0) (0xc0027572c0) Stream added, broadcasting: 5 I0124 00:27:00.668485 8 log.go:172] (0xc0052eabb0) Reply frame received for 5 I0124 00:27:00.742873 8 log.go:172] (0xc0052eabb0) Data frame received for 3 I0124 00:27:00.742978 8 log.go:172] (0xc0021d6140) (3) Data frame handling I0124 00:27:00.742991 8 log.go:172] (0xc0021d6140) (3) Data frame sent I0124 00:27:00.838867 8 log.go:172] (0xc0052eabb0) Data frame received for 1 I0124 00:27:00.839050 8 log.go:172] (0xc0052eabb0) (0xc0021d6140) Stream removed, broadcasting: 3 I0124 00:27:00.839119 8 log.go:172] (0xc002215e00) (1) Data frame handling I0124 00:27:00.839153 8 log.go:172] (0xc002215e00) (1) Data frame sent I0124 00:27:00.839167 8 log.go:172] (0xc0052eabb0) (0xc002215e00) Stream removed, broadcasting: 1 I0124 00:27:00.839400 8 log.go:172] (0xc0052eabb0) (0xc0027572c0) Stream removed, broadcasting: 5 I0124 00:27:00.839423 8 log.go:172] (0xc0052eabb0) (0xc002215e00) Stream removed, broadcasting: 1 I0124 00:27:00.839431 8 log.go:172] (0xc0052eabb0) (0xc0021d6140) Stream removed, broadcasting: 3 I0124 00:27:00.839439 8 log.go:172] (0xc0052eabb0) (0xc0027572c0) Stream removed, broadcasting: 5 I0124 00:27:00.839462 8 log.go:172] (0xc0052eabb0) Go away received Jan 24 00:27:00.839: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:27:00.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7912" for this suite. 
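Note: both exec calls above hit the test image's /dial endpoint on the probe pod at 10.44.0.1:8080, which in turn makes HTTP requests against the target pod and reports which hostnames answered; an empty "Waiting for responses: map[]" means every target replied. The query string can be assembled as below; dialURL is our own helper name, the IPs are the ones from the log, and url.Values.Encode sorts parameters alphabetically, so the literal ordering may differ from the curl line (the server does not care).

package main

import (
	"fmt"
	"net/url"
)

// dialURL builds the /dial request the test execs via curl: the pod at
// proxy:8080 performs `tries` HTTP requests against host:port and returns
// the hostnames it saw.
func dialURL(proxy, host string, port, tries int) string {
	q := url.Values{}
	q.Set("request", "hostname")
	q.Set("protocol", "http")
	q.Set("host", host)
	q.Set("port", fmt.Sprint(port))
	q.Set("tries", fmt.Sprint(tries))
	u := url.URL{
		Scheme:   "http",
		Host:     fmt.Sprintf("%s:8080", proxy),
		Path:     "/dial",
		RawQuery: q.Encode(),
	}
	return u.String()
}

func main() {
	fmt.Println(dialURL("10.44.0.1", "10.44.0.2", 8080, 1)) // same-node target
	fmt.Println(dialURL("10.44.0.1", "10.32.0.4", 8080, 1)) // cross-node target
}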
• [SLOW TEST:39.049 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2081,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:27:00.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W0124 00:27:01.831789 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 24 00:27:01.831: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:27:01.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3108" for this suite. 
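Note: what makes the garbage collector remove the ReplicaSet above is (a) the ownerReference the Deployment controller stamps onto it and (b) a deployment delete whose propagation policy is not Orphan. A sketch of those two pieces with v1.17-era apimachinery types; the deployment name and UID are placeholders, not from the log.

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The dependent (the ReplicaSet) carries an ownerReference back to the
	// Deployment; once the owner is deleted with a cascading policy, the
	// garbage collector removes it, which is what the test waits for.
	ctrl := true
	owner := metav1.OwnerReference{
		APIVersion: "apps/v1",
		Kind:       "Deployment",
		Name:       "test-deployment",
		UID:        "00000000-0000-0000-0000-000000000000", // placeholder
		Controller: &ctrl,
	}

	// Delete options that cascade instead of orphaning dependents.
	policy := metav1.DeletePropagationBackground
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}

	b, _ := json.Marshal(struct {
		Owner metav1.OwnerReference `json:"ownerReference"`
		Opts  metav1.DeleteOptions  `json:"deleteOptions"`
	}{owner, opts})
	fmt.Println(string(b))
}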
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":132,"skipped":2094,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:27:01.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jan 24 00:27:03.992: INFO: Number of nodes with available pods: 0 Jan 24 00:27:03.992: INFO: Node jerma-node is running more than one daemon pod Jan 24 00:27:07.436: INFO: Number of nodes with available pods: 0 Jan 24 00:27:07.436: INFO: Node jerma-node is running more than one daemon pod Jan 24 00:27:08.593: INFO: Number of nodes with available pods: 0 Jan 24 00:27:08.593: INFO: Node jerma-node is running more than one daemon pod Jan 24 00:27:09.166: INFO: Number of nodes with available pods: 0 Jan 24 00:27:09.166: INFO: Node jerma-node is running more than one daemon pod Jan 24 00:27:10.575: INFO: Number of nodes with available pods: 0 Jan 24 00:27:10.575: INFO: Node jerma-node is running more than one daemon pod Jan 24 00:27:11.155: INFO: Number of nodes with available pods: 0 Jan 24 00:27:11.155: INFO: Node jerma-node is running more than one daemon pod Jan 24 00:27:12.004: INFO: Number of nodes with available pods: 0 Jan 24 00:27:12.004: INFO: Node jerma-node is running more than one daemon pod Jan 24 00:27:13.169: INFO: Number of nodes with available pods: 0 Jan 24 00:27:13.170: INFO: Node jerma-node is running more than one daemon pod Jan 24 00:27:14.432: INFO: Number of nodes with available pods: 0 Jan 24 00:27:14.432: INFO: Node jerma-node is running more than one daemon pod Jan 24 00:27:15.247: INFO: Number of nodes with available pods: 0 Jan 24 00:27:15.247: INFO: Node jerma-node is running more than one daemon pod Jan 24 00:27:16.003: INFO: Number of nodes with available pods: 0 Jan 24 00:27:16.003: INFO: Node jerma-node is running more than one daemon pod Jan 24 00:27:17.009: INFO: Number of nodes with available pods: 0 Jan 24 00:27:17.009: INFO: Node jerma-node is running more than one daemon pod Jan 24 00:27:18.011: INFO: Number of nodes with available pods: 0 Jan 24 00:27:18.011: INFO: Node jerma-node is running more than one daemon pod Jan 24 00:27:19.028: INFO: Number of nodes with available pods: 2 Jan 24 00:27:19.028: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jan 24 00:27:19.144: INFO: Number of nodes with available pods: 2 Jan 24 00:27:19.144: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7167, will wait for the garbage collector to delete the pods Jan 24 00:27:20.320: INFO: Deleting DaemonSet.extensions daemon-set took: 23.433343ms Jan 24 00:27:20.720: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.345076ms Jan 24 00:27:27.026: INFO: Number of nodes with available pods: 0 Jan 24 00:27:27.026: INFO: Number of running nodes: 0, number of available pods: 0 Jan 24 00:27:27.032: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7167/daemonsets","resourceVersion":"3916238"},"items":null} Jan 24 00:27:27.034: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7167/pods","resourceVersion":"3916238"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:27:27.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7167" for this suite. • [SLOW TEST:25.207 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":133,"skipped":2097,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:27:27.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-map-9d868f3b-da7a-40a3-9cf4-536474621518 STEP: Creating a pod to test consume configMaps Jan 24 00:27:27.197: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ebb1a214-d382-4bab-bfad-8f1be892e7f7" in namespace "projected-8612" to be "success or failure" Jan 24 00:27:27.200: INFO: Pod "pod-projected-configmaps-ebb1a214-d382-4bab-bfad-8f1be892e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.082795ms Jan 24 00:27:29.206: INFO: Pod "pod-projected-configmaps-ebb1a214-d382-4bab-bfad-8f1be892e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008851805s Jan 24 00:27:31.212: INFO: Pod "pod-projected-configmaps-ebb1a214-d382-4bab-bfad-8f1be892e7f7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.014898847s Jan 24 00:27:33.216: INFO: Pod "pod-projected-configmaps-ebb1a214-d382-4bab-bfad-8f1be892e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019268742s Jan 24 00:27:35.227: INFO: Pod "pod-projected-configmaps-ebb1a214-d382-4bab-bfad-8f1be892e7f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.029523019s STEP: Saw pod success Jan 24 00:27:35.227: INFO: Pod "pod-projected-configmaps-ebb1a214-d382-4bab-bfad-8f1be892e7f7" satisfied condition "success or failure" Jan 24 00:27:35.232: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-ebb1a214-d382-4bab-bfad-8f1be892e7f7 container projected-configmap-volume-test: STEP: delete the pod Jan 24 00:27:35.278: INFO: Waiting for pod pod-projected-configmaps-ebb1a214-d382-4bab-bfad-8f1be892e7f7 to disappear Jan 24 00:27:35.294: INFO: Pod pod-projected-configmaps-ebb1a214-d382-4bab-bfad-8f1be892e7f7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:27:35.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8612" for this suite. • [SLOW TEST:8.292 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2099,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:27:35.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating server pod server in namespace prestop-5771 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-5771 STEP: Deleting pre-stop pod Jan 24 00:27:58.523: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:27:58.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-5771" for this suite. • [SLOW TEST:23.255 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":135,"skipped":2122,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:27:58.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with configMap that has name projected-configmap-test-upd-4902e852-a9b3-45dc-8a1c-1cb83929b306 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-4902e852-a9b3-45dc-8a1c-1cb83929b306 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:28:11.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9797" for this suite. 
• [SLOW TEST:12.468 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2130,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:28:11.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 24 00:28:11.707: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 24 00:28:13.729: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422491, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422491, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422491, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422491, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:28:15.735: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422491, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422491, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422491, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422491, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:28:17.750: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422491, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422491, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422491, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422491, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:28:19.734: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422491, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422491, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422491, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422491, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:28:21.734: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422491, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422491, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422491, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422491, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 24 00:28:24.872: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:28:25.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2515" for this suite. STEP: Destroying namespace "webhook-2515-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:14.150 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":137,"skipped":2132,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:28:25.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Jan 24 00:28:25.291: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Jan 24 00:28:35.617: INFO: >>> kubeConfig: /root/.kube/config Jan 24 00:28:38.206: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:28:49.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6359" for this suite. 
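Note: the OpenAPI check above registers CRDs serving several versions of one group, both as a single multiversion CRD and as two CRDs. The shape of a two-version CRD, sketched with the apiextensions v1 types; the group and kind names are made up, each served version needs its own schema, and exactly one version may be the storage version.

package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	schema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
	}
	crd := apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.demo.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "demo.example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				{Name: "v1", Served: true, Storage: true, Schema: schema},
				{Name: "v2", Served: true, Storage: false, Schema: schema},
			},
		},
	}
	fmt.Printf("%s serves versions:", crd.Name)
	for _, v := range crd.Spec.Versions {
		fmt.Printf(" %s", v.Name)
	}
	fmt.Println()
}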
• [SLOW TEST:24.696 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":138,"skipped":2173,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:28:49.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 24 00:28:51.007: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 24 00:28:53.021: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422531, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422531, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422531, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422530, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:28:55.027: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422531, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422531, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422531, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422530, 
loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:28:57.027: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422531, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422531, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422531, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422530, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 24 00:29:00.114: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:29:00.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-619" for this suite. STEP: Destroying namespace "webhook-619-markers" for this suite. 
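Note: the listing test above registers ValidatingWebhookConfigurations under a shared label and then deletes them as a collection by label selector, after which the non-compliant configMap is accepted again. Roughly the object and selector involved; every name here (configuration name, label key, webhook name, service path) is illustrative, and the field set assumes the admissionregistration/v1 API.

package main

import (
	"fmt"

	admregv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	failurePolicy := admregv1.Fail
	sideEffects := admregv1.SideEffectClassNone
	path := "/always-deny"

	cfg := admregv1.ValidatingWebhookConfiguration{
		// A shared label lets the test delete every configuration it
		// created in one collection delete.
		ObjectMeta: metav1.ObjectMeta{
			Name:   "e2e-demo-validating-cfg-1",
			Labels: map[string]string{"e2e-list-test-webhooks": "demo"},
		},
		Webhooks: []admregv1.ValidatingWebhook{{
			Name: "deny-configmaps.demo.example.com",
			Rules: []admregv1.RuleWithOperations{{
				Operations: []admregv1.OperationType{admregv1.Create},
				Rule: admregv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			ClientConfig: admregv1.WebhookClientConfig{
				Service: &admregv1.ServiceReference{
					Namespace: "default", Name: "e2e-test-webhook", Path: &path,
				},
			},
			FailurePolicy:           &failurePolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}

	listOpts := metav1.ListOptions{LabelSelector: "e2e-list-test-webhooks=demo"}
	fmt.Printf("created %s; collection delete selector: %q\n", cfg.Name, listOpts.LabelSelector)
}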
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:11.028 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":139,"skipped":2188,"failed":0} SS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:29:00.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod liveness-6afb79a7-7fe2-4e3c-8e12-d55d6718c752 in namespace container-probe-3777 Jan 24 00:29:13.124: INFO: Started pod liveness-6afb79a7-7fe2-4e3c-8e12-d55d6718c752 in namespace container-probe-3777 STEP: checking the pod's current state and verifying that restartCount is present Jan 24 00:29:13.129: INFO: Initial restart count of pod liveness-6afb79a7-7fe2-4e3c-8e12-d55d6718c752 is 0 Jan 24 00:29:41.270: INFO: Restart count of pod container-probe-3777/liveness-6afb79a7-7fe2-4e3c-8e12-d55d6718c752 is now 1 (28.141817027s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:29:41.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3777" for this suite. 
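Note: the restart observed above (restartCount 0 -> 1 after roughly 28s) is the kubelet acting on a failing HTTP liveness probe against /healthz. A sketch of such a probe, again with the v1.17-era embedded-Handler field names; the port and thresholds are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Unlike the readiness probe earlier in the run, a failing liveness
	// probe makes the kubelet kill and restart the container, which is
	// what bumps restartCount from 0 to 1.
	liveness := &corev1.Probe{
		Handler: corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/healthz",
				Port: intstr.FromInt(8080),
			},
		},
		InitialDelaySeconds: 15,
		FailureThreshold:    1,
	}
	fmt.Printf("liveness: GET %s on port %s\n", liveness.HTTPGet.Path, liveness.HTTPGet.Port.String())
}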
• [SLOW TEST:40.402 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2190,"failed":0} S ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:29:41.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:30:19.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9187" for this suite. • [SLOW TEST:38.155 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":141,"skipped":2191,"failed":0} [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:30:19.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jan 24 00:30:20.668: INFO: Pod name wrapped-volume-race-785f2c03-0bdf-4fda-bf27-d0267283fa00: Found 0 pods out of 5 Jan 24 00:30:25.683: INFO: Pod name wrapped-volume-race-785f2c03-0bdf-4fda-bf27-d0267283fa00: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-785f2c03-0bdf-4fda-bf27-d0267283fa00 in namespace emptydir-wrapper-3833, will wait for the garbage collector to delete the pods Jan 24 
00:30:53.980: INFO: Deleting ReplicationController wrapped-volume-race-785f2c03-0bdf-4fda-bf27-d0267283fa00 took: 33.711823ms Jan 24 00:30:54.481: INFO: Terminating ReplicationController wrapped-volume-race-785f2c03-0bdf-4fda-bf27-d0267283fa00 pods took: 500.303807ms STEP: Creating RC which spawns configmap-volume pods Jan 24 00:31:12.724: INFO: Pod name wrapped-volume-race-d016eb06-2844-4320-af3c-1daf293e2e44: Found 0 pods out of 5 Jan 24 00:31:17.736: INFO: Pod name wrapped-volume-race-d016eb06-2844-4320-af3c-1daf293e2e44: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d016eb06-2844-4320-af3c-1daf293e2e44 in namespace emptydir-wrapper-3833, will wait for the garbage collector to delete the pods Jan 24 00:31:50.047: INFO: Deleting ReplicationController wrapped-volume-race-d016eb06-2844-4320-af3c-1daf293e2e44 took: 75.60536ms Jan 24 00:31:50.548: INFO: Terminating ReplicationController wrapped-volume-race-d016eb06-2844-4320-af3c-1daf293e2e44 pods took: 500.466616ms STEP: Creating RC which spawns configmap-volume pods Jan 24 00:32:12.497: INFO: Pod name wrapped-volume-race-9a853767-0b49-4acf-9acf-12bad330c959: Found 0 pods out of 5 Jan 24 00:32:17.506: INFO: Pod name wrapped-volume-race-9a853767-0b49-4acf-9acf-12bad330c959: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9a853767-0b49-4acf-9acf-12bad330c959 in namespace emptydir-wrapper-3833, will wait for the garbage collector to delete the pods Jan 24 00:32:51.642: INFO: Deleting ReplicationController wrapped-volume-race-9a853767-0b49-4acf-9acf-12bad330c959 took: 35.883013ms Jan 24 00:32:52.143: INFO: Terminating ReplicationController wrapped-volume-race-9a853767-0b49-4acf-9acf-12bad330c959 pods took: 500.293782ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:33:13.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3833" for this suite. 
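Note: each ReplicationController above spawns five pods whose spec mounts all 50 configMaps at once, and the cycle repeats three times; racing that many wrapped volumes per pod is what shakes out the historical emptyDir-wrapper race. Building such a spec is mostly a loop; a sketch with illustrative names, v1.17-era types assumed.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	const n = 50
	var volumes []corev1.Volume
	var mounts []corev1.VolumeMount
	for i := 0; i < n; i++ {
		name := fmt.Sprintf("racey-configmap-%d", i)
		volumes = append(volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
		mounts = append(mounts, corev1.VolumeMount{
			Name:      name,
			MountPath: fmt.Sprintf("/etc/config-%d", i),
		})
	}
	spec := corev1.PodSpec{
		Containers: []corev1.Container{{
			Name: "test-container", Image: "busybox", VolumeMounts: mounts,
		}},
		Volumes: volumes,
	}
	fmt.Printf("pod spec mounts %d configMap volumes\n", len(spec.Volumes))
}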
• [SLOW TEST:173.896 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":142,"skipped":2191,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:33:13.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:34:11.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8781" for this suite. 
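Note: the three container names above encode the restart policy under test; rpa/rpof/rpn plausibly stand for Always, OnFailure and Never (an inference from the names, not stated in the log), and for each the suite checks RestartCount, Phase, the Ready condition and State. A sketch of the three pod variants; the image and exit command are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One short-lived container per restart policy; what differs across the
	// three cases is how the kubelet reacts when the command exits.
	policies := map[string]corev1.RestartPolicy{
		"terminate-cmd-rpa":  corev1.RestartPolicyAlways,
		"terminate-cmd-rpof": corev1.RestartPolicyOnFailure,
		"terminate-cmd-rpn":  corev1.RestartPolicyNever,
	}
	for name, policy := range policies {
		spec := corev1.PodSpec{
			RestartPolicy: policy,
			Containers: []corev1.Container{{
				Name:    name,
				Image:   "busybox",
				Command: []string{"sh", "-c", "exit 1"}, // terminates immediately
			}},
		}
		fmt.Printf("%s -> restartPolicy=%s\n", spec.Containers[0].Name, spec.RestartPolicy)
	}
}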
• [SLOW TEST:57.676 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2193,"failed":0} [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:34:11.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:73 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 24 00:34:11.163: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 24 00:34:11.175: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 24 00:34:16.182: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 24 00:34:20.230: INFO: Creating deployment "test-rolling-update-deployment" Jan 24 00:34:20.239: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 24 00:34:20.253: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 24 00:34:22.262: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 24 00:34:22.265: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422860, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422860, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422860, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422860, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:34:24.269: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, 
AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422860, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422860, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422860, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422860, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:34:26.272: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422860, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422860, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422860, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715422860, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:34:28.270: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:67 Jan 24 00:34:28.285: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-3628 /apis/apps/v1/namespaces/deployment-3628/deployments/test-rolling-update-deployment dbd1d5bc-7239-4d06-93cc-bd780037b367 3918597 1 2020-01-24 00:34:20 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0037b2228 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-24 00:34:20 +0000 UTC,LastTransitionTime:2020-01-24 00:34:20 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-01-24 00:34:27 +0000 UTC,LastTransitionTime:2020-01-24 00:34:20 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 24 00:34:28.289: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-3628 /apis/apps/v1/namespaces/deployment-3628/replicasets/test-rolling-update-deployment-67cf4f6444 52193658-9725-4895-ae79-de541d0590f1 3918584 1 2020-01-24 00:34:20 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment dbd1d5bc-7239-4d06-93cc-bd780037b367 0xc002f8ad27 0xc002f8ad28}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f8ad98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 24 00:34:28.289: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 24 00:34:28.289: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-3628 /apis/apps/v1/namespaces/deployment-3628/replicasets/test-rolling-update-controller 22e27be0-8059-461e-80a2-01ebc273ff89 3918596 2 2020-01-24 00:34:11 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment dbd1d5bc-7239-4d06-93cc-bd780037b367 0xc002f8ac57 0xc002f8ac58}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod:
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002f8acb8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 24 00:34:28.293: INFO: Pod "test-rolling-update-deployment-67cf4f6444-j8pzk" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-j8pzk test-rolling-update-deployment-67cf4f6444- deployment-3628 /api/v1/namespaces/deployment-3628/pods/test-rolling-update-deployment-67cf4f6444-j8pzk 895d46e2-559d-4d03-8b60-b8d2a15cfb3b 3918583 0 2020-01-24 00:34:20 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 52193658-9725-4895-ae79-de541d0590f1 0xc002f8b1e7 0xc002f8b1e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wv55k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wv55k,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wv55k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute
,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:34:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:34:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:34:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 00:34:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-24 00:34:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-24 00:34:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://18b0489f504566e6375ef5e2d7b01cc595f6e29c4656e05f713659162a581669,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:34:28.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3628" for this suite. 
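The rollout above reduces to a standard RollingUpdate Deployment whose selector also matches a pre-existing controller ("test-rolling-update-controller"), which the Deployment adopts and scales to zero. A minimal kubectl reproduction of the Deployment half follows; the manifest is a sketch assembled from the spec dump above, not the framework's exact object, and assumes a reachable cluster via the current kubeconfig context.

# Sketch: recreate the Deployment described in the status dumps above.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
EOF
# The Progressing condition moves from ReplicaSetUpdated to
# NewReplicaSetAvailable, the same transition logged above.
kubectl rollout status deployment/test-rolling-update-deployment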
• [SLOW TEST:17.222 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":144,"skipped":2193,"failed":0} SSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:34:28.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jan 24 00:34:28.399: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:34:52.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-316" for this suite. 
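The submit/watch/delete flow exercised above can be approximated from the command line. Pod name, image, and the agnhost "pause" subcommand are illustrative here, since the log does not print them for this spec; kubectl's --watch flag surfaces the same creation and deletion events the test asserts on.

# Names are illustrative; the flow mirrors the submit/watch/delete steps above.
kubectl run test-pod --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --restart=Never -- pause
# A watch sees the creation event, then the later deletion events.
kubectl get pod test-pod --watch &
# Graceful delete: the kubelet observes the termination notice first,
# then the watch reports the pod's removal.
kubectl delete pod test-pod --grace-period=30
wait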
• [SLOW TEST:24.109 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2197,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:34:52.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:34:52.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-1669" for this suite. 
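The Table-transformation spec asks the API server to render a resource as a server-side Table and expects 406 Not Acceptable from a backend that cannot. A hedged sketch of issuing such a request through kubectl proxy; note that core resources like pods do implement the Table conversion and answer 200, and only a backend without it (the case this spec constructs) returns 406.

# Start a local proxy so curl can reach the API server unauthenticated.
kubectl proxy --port=8001 &
# Content-negotiate a server-side Table rendering via the Accept header.
curl -s -o /dev/null -w '%{http_code}\n' \
  -H 'Accept: application/json;as=Table;v=v1;g=meta.k8s.io' \
  http://127.0.0.1:8001/api/v1/namespaces/default/pods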
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":146,"skipped":2246,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:34:52.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 24 00:35:01.158: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e4da9520-4d66-4d60-9883-e2a046c53c33" Jan 24 00:35:01.158: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e4da9520-4d66-4d60-9883-e2a046c53c33" in namespace "pods-8100" to be "terminated due to deadline exceeded" Jan 24 00:35:01.164: INFO: Pod "pod-update-activedeadlineseconds-e4da9520-4d66-4d60-9883-e2a046c53c33": Phase="Running", Reason="", readiness=true. Elapsed: 6.706773ms Jan 24 00:35:03.174: INFO: Pod "pod-update-activedeadlineseconds-e4da9520-4d66-4d60-9883-e2a046c53c33": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.016059819s Jan 24 00:35:03.174: INFO: Pod "pod-update-activedeadlineseconds-e4da9520-4d66-4d60-9883-e2a046c53c33" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:35:03.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8100" for this suite. 
• [SLOW TEST:10.633 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2273,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:35:03.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-bvsb7 in namespace proxy-8543 I0124 00:35:03.475994 8 runners.go:189] Created replication controller with name: proxy-service-bvsb7, namespace: proxy-8543, replica count: 1 I0124 00:35:04.526955 8 runners.go:189] proxy-service-bvsb7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:35:05.527315 8 runners.go:189] proxy-service-bvsb7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:35:06.527882 8 runners.go:189] proxy-service-bvsb7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:35:07.528214 8 runners.go:189] proxy-service-bvsb7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:35:08.528697 8 runners.go:189] proxy-service-bvsb7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:35:09.528967 8 runners.go:189] proxy-service-bvsb7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:35:10.529464 8 runners.go:189] proxy-service-bvsb7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:35:11.529847 8 runners.go:189] proxy-service-bvsb7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:35:12.530248 8 runners.go:189] proxy-service-bvsb7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:35:13.530631 8 runners.go:189] proxy-service-bvsb7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0124 00:35:14.531032 8 runners.go:189] proxy-service-bvsb7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0124 00:35:15.531347 8 runners.go:189] proxy-service-bvsb7 Pods: 1 
out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0124 00:35:16.531706 8 runners.go:189] proxy-service-bvsb7 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 24 00:35:16.538: INFO: setup took 13.222359378s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jan 24 00:35:16.566: INFO: (0) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:160/proxy/: foo (200; 28.220708ms) Jan 24 00:35:16.570: INFO: (0) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname1/proxy/: foo (200; 31.910595ms) Jan 24 00:35:16.571: INFO: (0) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:160/proxy/: foo (200; 32.588072ms) Jan 24 00:35:16.576: INFO: (0) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname1/proxy/: foo (200; 37.538713ms) Jan 24 00:35:16.576: INFO: (0) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:162/proxy/: bar (200; 37.141288ms) Jan 24 00:35:16.581: INFO: (0) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877/proxy/: test (200; 43.195033ms) Jan 24 00:35:16.587: INFO: (0) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:162/proxy/: bar (200; 48.2812ms) Jan 24 00:35:16.587: INFO: (0) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname2/proxy/: bar (200; 48.253462ms) Jan 24 00:35:16.587: INFO: (0) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname2/proxy/: bar (200; 48.16933ms) Jan 24 00:35:16.587: INFO: (0) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:1080/proxy/: ... (200; 48.089774ms) Jan 24 00:35:16.589: INFO: (0) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:1080/proxy/: test<... (200; 50.421314ms) Jan 24 00:35:16.596: INFO: (0) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname1/proxy/: tls baz (200; 57.215181ms) Jan 24 00:35:16.597: INFO: (0) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:462/proxy/: tls qux (200; 58.2738ms) Jan 24 00:35:16.597: INFO: (0) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:460/proxy/: tls baz (200; 58.175593ms) Jan 24 00:35:16.597: INFO: (0) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname2/proxy/: tls qux (200; 58.497883ms) Jan 24 00:35:16.597: INFO: (0) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:443/proxy/: test<... (200; 18.679956ms) Jan 24 00:35:16.616: INFO: (1) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:162/proxy/: bar (200; 18.37165ms) Jan 24 00:35:16.617: INFO: (1) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:162/proxy/: bar (200; 19.006574ms) Jan 24 00:35:16.623: INFO: (1) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname2/proxy/: bar (200; 25.067033ms) Jan 24 00:35:16.624: INFO: (1) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:443/proxy/: ... 
(200; 26.714211ms) Jan 24 00:35:16.625: INFO: (1) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:160/proxy/: foo (200; 27.152615ms) Jan 24 00:35:16.625: INFO: (1) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname2/proxy/: tls qux (200; 27.708343ms) Jan 24 00:35:16.626: INFO: (1) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877/proxy/: test (200; 27.757379ms) Jan 24 00:35:16.626: INFO: (1) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:462/proxy/: tls qux (200; 27.871925ms) Jan 24 00:35:16.626: INFO: (1) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname1/proxy/: foo (200; 28.957186ms) Jan 24 00:35:16.626: INFO: (1) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname2/proxy/: bar (200; 28.766691ms) Jan 24 00:35:16.626: INFO: (1) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname1/proxy/: foo (200; 28.839644ms) Jan 24 00:35:16.641: INFO: (2) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:162/proxy/: bar (200; 14.417017ms) Jan 24 00:35:16.641: INFO: (2) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:1080/proxy/: test<... (200; 14.600611ms) Jan 24 00:35:16.644: INFO: (2) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname1/proxy/: foo (200; 17.399499ms) Jan 24 00:35:16.644: INFO: (2) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:160/proxy/: foo (200; 17.128378ms) Jan 24 00:35:16.644: INFO: (2) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877/proxy/: test (200; 17.346828ms) Jan 24 00:35:16.644: INFO: (2) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname1/proxy/: foo (200; 17.11905ms) Jan 24 00:35:16.644: INFO: (2) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:160/proxy/: foo (200; 17.324373ms) Jan 24 00:35:16.644: INFO: (2) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname2/proxy/: bar (200; 17.24013ms) Jan 24 00:35:16.644: INFO: (2) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:162/proxy/: bar (200; 17.31811ms) Jan 24 00:35:16.645: INFO: (2) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:1080/proxy/: ... (200; 18.404165ms) Jan 24 00:35:16.645: INFO: (2) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname1/proxy/: tls baz (200; 18.250336ms) Jan 24 00:35:16.645: INFO: (2) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname2/proxy/: bar (200; 18.320351ms) Jan 24 00:35:16.645: INFO: (2) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:443/proxy/: ... 
(200; 10.92523ms) Jan 24 00:35:16.659: INFO: (3) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname2/proxy/: bar (200; 12.259793ms) Jan 24 00:35:16.659: INFO: (3) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname2/proxy/: bar (200; 12.481122ms) Jan 24 00:35:16.659: INFO: (3) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:462/proxy/: tls qux (200; 12.024649ms) Jan 24 00:35:16.660: INFO: (3) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:162/proxy/: bar (200; 12.509383ms) Jan 24 00:35:16.660: INFO: (3) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:460/proxy/: tls baz (200; 13.719314ms) Jan 24 00:35:16.660: INFO: (3) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:160/proxy/: foo (200; 12.847711ms) Jan 24 00:35:16.661: INFO: (3) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname1/proxy/: foo (200; 14.390638ms) Jan 24 00:35:16.661: INFO: (3) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname1/proxy/: tls baz (200; 14.214509ms) Jan 24 00:35:16.661: INFO: (3) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname1/proxy/: foo (200; 14.221761ms) Jan 24 00:35:16.661: INFO: (3) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:162/proxy/: bar (200; 13.989514ms) Jan 24 00:35:16.661: INFO: (3) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877/proxy/: test (200; 14.08598ms) Jan 24 00:35:16.662: INFO: (3) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:1080/proxy/: test<... (200; 15.363081ms) Jan 24 00:35:16.671: INFO: (4) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:1080/proxy/: test<... (200; 8.27702ms) Jan 24 00:35:16.671: INFO: (4) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:160/proxy/: foo (200; 8.675453ms) Jan 24 00:35:16.671: INFO: (4) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:162/proxy/: bar (200; 8.759298ms) Jan 24 00:35:16.671: INFO: (4) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:1080/proxy/: ... (200; 8.749336ms) Jan 24 00:35:16.671: INFO: (4) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:460/proxy/: tls baz (200; 8.974228ms) Jan 24 00:35:16.672: INFO: (4) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:162/proxy/: bar (200; 9.422204ms) Jan 24 00:35:16.672: INFO: (4) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:160/proxy/: foo (200; 9.512335ms) Jan 24 00:35:16.672: INFO: (4) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877/proxy/: test (200; 9.5301ms) Jan 24 00:35:16.672: INFO: (4) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:443/proxy/: test (200; 9.875806ms) Jan 24 00:35:16.685: INFO: (5) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname2/proxy/: tls qux (200; 10.036647ms) Jan 24 00:35:16.687: INFO: (5) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:160/proxy/: foo (200; 11.155246ms) Jan 24 00:35:16.687: INFO: (5) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:162/proxy/: bar (200; 11.241367ms) Jan 24 00:35:16.687: INFO: (5) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:1080/proxy/: test<... 
(200; 11.223531ms) Jan 24 00:35:16.687: INFO: (5) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname2/proxy/: bar (200; 11.347331ms) Jan 24 00:35:16.687: INFO: (5) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname1/proxy/: foo (200; 11.370655ms) Jan 24 00:35:16.687: INFO: (5) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname1/proxy/: tls baz (200; 11.333608ms) Jan 24 00:35:16.687: INFO: (5) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:1080/proxy/: ... (200; 11.434525ms) Jan 24 00:35:16.688: INFO: (5) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:162/proxy/: bar (200; 12.82793ms) Jan 24 00:35:16.688: INFO: (5) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:460/proxy/: tls baz (200; 12.850263ms) Jan 24 00:35:16.688: INFO: (5) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname2/proxy/: bar (200; 12.909829ms) Jan 24 00:35:16.695: INFO: (6) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:162/proxy/: bar (200; 6.185163ms) Jan 24 00:35:16.695: INFO: (6) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:162/proxy/: bar (200; 6.274913ms) Jan 24 00:35:16.695: INFO: (6) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:462/proxy/: tls qux (200; 6.332552ms) Jan 24 00:35:16.695: INFO: (6) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:443/proxy/: test (200; 7.601452ms) Jan 24 00:35:16.696: INFO: (6) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:160/proxy/: foo (200; 7.873036ms) Jan 24 00:35:16.696: INFO: (6) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:1080/proxy/: ... (200; 8.033137ms) Jan 24 00:35:16.697: INFO: (6) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:1080/proxy/: test<... (200; 8.12599ms) Jan 24 00:35:16.697: INFO: (6) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:160/proxy/: foo (200; 8.293061ms) Jan 24 00:35:16.698: INFO: (6) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:460/proxy/: tls baz (200; 9.50132ms) Jan 24 00:35:16.700: INFO: (6) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname2/proxy/: bar (200; 11.734543ms) Jan 24 00:35:16.700: INFO: (6) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname2/proxy/: bar (200; 12.022698ms) Jan 24 00:35:16.701: INFO: (6) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname2/proxy/: tls qux (200; 12.13347ms) Jan 24 00:35:16.701: INFO: (6) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname1/proxy/: tls baz (200; 12.345717ms) Jan 24 00:35:16.701: INFO: (6) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname1/proxy/: foo (200; 12.360272ms) Jan 24 00:35:16.701: INFO: (6) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname1/proxy/: foo (200; 12.637391ms) Jan 24 00:35:16.709: INFO: (7) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:443/proxy/: ... 
(200; 8.481589ms) Jan 24 00:35:16.710: INFO: (7) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:162/proxy/: bar (200; 8.479753ms) Jan 24 00:35:16.715: INFO: (7) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:160/proxy/: foo (200; 13.472227ms) Jan 24 00:35:16.715: INFO: (7) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:462/proxy/: tls qux (200; 13.588695ms) Jan 24 00:35:16.715: INFO: (7) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:160/proxy/: foo (200; 13.63061ms) Jan 24 00:35:16.717: INFO: (7) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877/proxy/: test (200; 16.202561ms) Jan 24 00:35:16.717: INFO: (7) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:162/proxy/: bar (200; 16.202601ms) Jan 24 00:35:16.718: INFO: (7) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname2/proxy/: bar (200; 16.480702ms) Jan 24 00:35:16.718: INFO: (7) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname1/proxy/: foo (200; 16.492213ms) Jan 24 00:35:16.718: INFO: (7) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname1/proxy/: foo (200; 16.471325ms) Jan 24 00:35:16.718: INFO: (7) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:1080/proxy/: test<... (200; 16.458392ms) Jan 24 00:35:16.718: INFO: (7) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname2/proxy/: tls qux (200; 16.530242ms) Jan 24 00:35:16.719: INFO: (7) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname1/proxy/: tls baz (200; 17.832064ms) Jan 24 00:35:16.719: INFO: (7) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname2/proxy/: bar (200; 17.875579ms) Jan 24 00:35:16.728: INFO: (8) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:162/proxy/: bar (200; 8.886613ms) Jan 24 00:35:16.728: INFO: (8) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:160/proxy/: foo (200; 9.016943ms) Jan 24 00:35:16.728: INFO: (8) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:1080/proxy/: test<... (200; 8.917121ms) Jan 24 00:35:16.728: INFO: (8) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:1080/proxy/: ... (200; 9.142041ms) Jan 24 00:35:16.729: INFO: (8) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877/proxy/: test (200; 9.468866ms) Jan 24 00:35:16.729: INFO: (8) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:443/proxy/: ... (200; 10.315475ms) Jan 24 00:35:16.747: INFO: (9) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname2/proxy/: bar (200; 12.357276ms) Jan 24 00:35:16.753: INFO: (9) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:1080/proxy/: test<... (200; 17.884607ms) Jan 24 00:35:16.753: INFO: (9) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname1/proxy/: foo (200; 18.428202ms) Jan 24 00:35:16.754: INFO: (9) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname2/proxy/: tls qux (200; 18.924974ms) Jan 24 00:35:16.754: INFO: (9) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877/proxy/: test (200; 19.144035ms) Jan 24 00:35:16.754: INFO: (9) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:162/proxy/: bar (200; 19.416585ms) Jan 24 00:35:16.754: INFO: (9) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:443/proxy/: ... 
(200; 3.926049ms) Jan 24 00:35:16.762: INFO: (10) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:162/proxy/: bar (200; 6.6049ms) Jan 24 00:35:16.765: INFO: (10) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:160/proxy/: foo (200; 9.419604ms) Jan 24 00:35:16.765: INFO: (10) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname1/proxy/: tls baz (200; 9.6153ms) Jan 24 00:35:16.767: INFO: (10) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:462/proxy/: tls qux (200; 10.989048ms) Jan 24 00:35:16.767: INFO: (10) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:1080/proxy/: test<... (200; 10.996818ms) Jan 24 00:35:16.767: INFO: (10) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:460/proxy/: tls baz (200; 10.988458ms) Jan 24 00:35:16.767: INFO: (10) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:162/proxy/: bar (200; 11.084829ms) Jan 24 00:35:16.767: INFO: (10) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877/proxy/: test (200; 11.08375ms) Jan 24 00:35:16.767: INFO: (10) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:160/proxy/: foo (200; 11.094333ms) Jan 24 00:35:16.767: INFO: (10) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname1/proxy/: foo (200; 11.102299ms) Jan 24 00:35:16.767: INFO: (10) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:443/proxy/: test (200; 8.192579ms) Jan 24 00:35:16.776: INFO: (11) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname2/proxy/: bar (200; 8.666ms) Jan 24 00:35:16.777: INFO: (11) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:443/proxy/: ... (200; 9.432464ms) Jan 24 00:35:16.777: INFO: (11) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname2/proxy/: tls qux (200; 9.401914ms) Jan 24 00:35:16.777: INFO: (11) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:160/proxy/: foo (200; 9.439413ms) Jan 24 00:35:16.777: INFO: (11) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:462/proxy/: tls qux (200; 9.387167ms) Jan 24 00:35:16.777: INFO: (11) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:1080/proxy/: test<... (200; 9.472115ms) Jan 24 00:35:16.777: INFO: (11) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:460/proxy/: tls baz (200; 9.550068ms) Jan 24 00:35:16.778: INFO: (11) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:160/proxy/: foo (200; 10.006305ms) Jan 24 00:35:16.778: INFO: (11) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:162/proxy/: bar (200; 10.080615ms) Jan 24 00:35:16.778: INFO: (11) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname2/proxy/: bar (200; 10.365813ms) Jan 24 00:35:16.778: INFO: (11) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:162/proxy/: bar (200; 10.298677ms) Jan 24 00:35:16.779: INFO: (11) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname1/proxy/: foo (200; 10.88738ms) Jan 24 00:35:16.785: INFO: (12) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:160/proxy/: foo (200; 6.486235ms) Jan 24 00:35:16.786: INFO: (12) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:1080/proxy/: ... 
(200; 7.037106ms) Jan 24 00:35:16.786: INFO: (12) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:162/proxy/: bar (200; 7.313354ms) Jan 24 00:35:16.786: INFO: (12) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877/proxy/: test (200; 7.491757ms) Jan 24 00:35:16.787: INFO: (12) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:460/proxy/: tls baz (200; 8.24985ms) Jan 24 00:35:16.787: INFO: (12) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:462/proxy/: tls qux (200; 8.441043ms) Jan 24 00:35:16.787: INFO: (12) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:162/proxy/: bar (200; 8.537723ms) Jan 24 00:35:16.788: INFO: (12) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:443/proxy/: test<... (200; 10.367203ms) Jan 24 00:35:16.790: INFO: (12) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname1/proxy/: tls baz (200; 10.949101ms) Jan 24 00:35:16.790: INFO: (12) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname2/proxy/: tls qux (200; 11.108537ms) Jan 24 00:35:16.790: INFO: (12) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname1/proxy/: foo (200; 11.26874ms) Jan 24 00:35:16.790: INFO: (12) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname2/proxy/: bar (200; 11.472248ms) Jan 24 00:35:16.790: INFO: (12) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname1/proxy/: foo (200; 11.852682ms) Jan 24 00:35:16.791: INFO: (12) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname2/proxy/: bar (200; 12.575414ms) Jan 24 00:35:16.802: INFO: (13) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:160/proxy/: foo (200; 10.322227ms) Jan 24 00:35:16.802: INFO: (13) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:1080/proxy/: test<... (200; 10.319764ms) Jan 24 00:35:16.802: INFO: (13) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:162/proxy/: bar (200; 10.330743ms) Jan 24 00:35:16.802: INFO: (13) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:1080/proxy/: ... 
(200; 10.443556ms) Jan 24 00:35:16.802: INFO: (13) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:443/proxy/: test (200; 10.655956ms) Jan 24 00:35:16.802: INFO: (13) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname1/proxy/: foo (200; 10.644314ms) Jan 24 00:35:16.802: INFO: (13) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:460/proxy/: tls baz (200; 10.96255ms) Jan 24 00:35:16.803: INFO: (13) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname2/proxy/: bar (200; 12.034576ms) Jan 24 00:35:16.803: INFO: (13) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:160/proxy/: foo (200; 12.001161ms) Jan 24 00:35:16.803: INFO: (13) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname2/proxy/: tls qux (200; 12.062201ms) Jan 24 00:35:16.803: INFO: (13) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:162/proxy/: bar (200; 12.100089ms) Jan 24 00:35:16.803: INFO: (13) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname1/proxy/: tls baz (200; 12.189241ms) Jan 24 00:35:16.804: INFO: (13) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname1/proxy/: foo (200; 12.748736ms) Jan 24 00:35:16.804: INFO: (13) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname2/proxy/: bar (200; 12.829683ms) Jan 24 00:35:16.810: INFO: (14) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:162/proxy/: bar (200; 5.555419ms) Jan 24 00:35:16.810: INFO: (14) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877/proxy/: test (200; 5.647936ms) Jan 24 00:35:16.810: INFO: (14) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:443/proxy/: test<... (200; 6.128384ms) Jan 24 00:35:16.813: INFO: (14) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:160/proxy/: foo (200; 8.589418ms) Jan 24 00:35:16.813: INFO: (14) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:162/proxy/: bar (200; 8.897942ms) Jan 24 00:35:16.813: INFO: (14) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname1/proxy/: tls baz (200; 8.803786ms) Jan 24 00:35:16.814: INFO: (14) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:1080/proxy/: ... 
(200; 9.083686ms) Jan 24 00:35:16.814: INFO: (14) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:462/proxy/: tls qux (200; 9.223732ms) Jan 24 00:35:16.814: INFO: (14) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname2/proxy/: bar (200; 9.953062ms) Jan 24 00:35:16.814: INFO: (14) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname1/proxy/: foo (200; 9.777343ms) Jan 24 00:35:16.815: INFO: (14) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname1/proxy/: foo (200; 10.005288ms) Jan 24 00:35:16.815: INFO: (14) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname2/proxy/: bar (200; 9.976778ms) Jan 24 00:35:16.815: INFO: (14) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:160/proxy/: foo (200; 10.915622ms) Jan 24 00:35:16.816: INFO: (14) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname2/proxy/: tls qux (200; 10.826127ms) Jan 24 00:35:16.822: INFO: (15) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:460/proxy/: tls baz (200; 6.328004ms) Jan 24 00:35:16.823: INFO: (15) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:160/proxy/: foo (200; 7.275717ms) Jan 24 00:35:16.825: INFO: (15) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:462/proxy/: tls qux (200; 9.362625ms) Jan 24 00:35:16.825: INFO: (15) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:160/proxy/: foo (200; 9.848413ms) Jan 24 00:35:16.825: INFO: (15) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname2/proxy/: tls qux (200; 9.793543ms) Jan 24 00:35:16.826: INFO: (15) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:162/proxy/: bar (200; 10.227355ms) Jan 24 00:35:16.826: INFO: (15) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname1/proxy/: foo (200; 10.161201ms) Jan 24 00:35:16.826: INFO: (15) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname2/proxy/: bar (200; 10.290774ms) Jan 24 00:35:16.826: INFO: (15) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:162/proxy/: bar (200; 10.234077ms) Jan 24 00:35:16.826: INFO: (15) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877/proxy/: test (200; 10.223895ms) Jan 24 00:35:16.827: INFO: (15) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:1080/proxy/: ... (200; 10.977858ms) Jan 24 00:35:16.827: INFO: (15) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:1080/proxy/: test<... (200; 10.989307ms) Jan 24 00:35:16.827: INFO: (15) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname1/proxy/: foo (200; 11.02388ms) Jan 24 00:35:16.827: INFO: (15) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname2/proxy/: bar (200; 11.558051ms) Jan 24 00:35:16.827: INFO: (15) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname1/proxy/: tls baz (200; 11.578965ms) Jan 24 00:35:16.827: INFO: (15) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:443/proxy/: test<... 
(200; 10.76352ms) Jan 24 00:35:16.838: INFO: (16) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname1/proxy/: tls baz (200; 10.812917ms) Jan 24 00:35:16.838: INFO: (16) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:443/proxy/: test (200; 10.890197ms) Jan 24 00:35:16.838: INFO: (16) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname1/proxy/: foo (200; 10.883916ms) Jan 24 00:35:16.838: INFO: (16) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname2/proxy/: bar (200; 10.869645ms) Jan 24 00:35:16.838: INFO: (16) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:160/proxy/: foo (200; 10.88854ms) Jan 24 00:35:16.839: INFO: (16) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:460/proxy/: tls baz (200; 11.092367ms) Jan 24 00:35:16.839: INFO: (16) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:1080/proxy/: ... (200; 11.347378ms) Jan 24 00:35:16.847: INFO: (17) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:462/proxy/: tls qux (200; 8.666842ms) Jan 24 00:35:16.848: INFO: (17) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:1080/proxy/: ... (200; 9.587848ms) Jan 24 00:35:16.849: INFO: (17) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:160/proxy/: foo (200; 9.928392ms) Jan 24 00:35:16.849: INFO: (17) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:160/proxy/: foo (200; 9.936498ms) Jan 24 00:35:16.849: INFO: (17) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:162/proxy/: bar (200; 10.044067ms) Jan 24 00:35:16.853: INFO: (17) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:443/proxy/: test (200; 16.971473ms) Jan 24 00:35:16.858: INFO: (17) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname2/proxy/: bar (200; 19.422812ms) Jan 24 00:35:16.858: INFO: (17) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:1080/proxy/: test<... (200; 19.362231ms) Jan 24 00:35:16.859: INFO: (17) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname1/proxy/: tls baz (200; 19.615705ms) Jan 24 00:35:16.859: INFO: (17) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname2/proxy/: tls qux (200; 20.33891ms) Jan 24 00:35:16.866: INFO: (18) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:160/proxy/: foo (200; 6.489656ms) Jan 24 00:35:16.870: INFO: (18) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:460/proxy/: tls baz (200; 10.566619ms) Jan 24 00:35:16.870: INFO: (18) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:1080/proxy/: ... (200; 10.862711ms) Jan 24 00:35:16.872: INFO: (18) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877/proxy/: test (200; 12.178469ms) Jan 24 00:35:16.872: INFO: (18) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:462/proxy/: tls qux (200; 12.788693ms) Jan 24 00:35:16.874: INFO: (18) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:162/proxy/: bar (200; 14.513774ms) Jan 24 00:35:16.874: INFO: (18) /api/v1/namespaces/proxy-8543/pods/http:proxy-service-bvsb7-zw877:160/proxy/: foo (200; 14.47456ms) Jan 24 00:35:16.875: INFO: (18) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:1080/proxy/: test<... 
(200; 15.221429ms) Jan 24 00:35:16.875: INFO: (18) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:162/proxy/: bar (200; 15.531801ms) Jan 24 00:35:16.875: INFO: (18) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:443/proxy/: ... (200; 3.901117ms) Jan 24 00:35:16.882: INFO: (19) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:443/proxy/: test (200; 10.400841ms) Jan 24 00:35:16.890: INFO: (19) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname1/proxy/: tls baz (200; 11.380836ms) Jan 24 00:35:16.890: INFO: (19) /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname2/proxy/: tls qux (200; 11.492397ms) Jan 24 00:35:16.890: INFO: (19) /api/v1/namespaces/proxy-8543/pods/https:proxy-service-bvsb7-zw877:462/proxy/: tls qux (200; 11.536964ms) Jan 24 00:35:16.890: INFO: (19) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname1/proxy/: foo (200; 11.679894ms) Jan 24 00:35:16.890: INFO: (19) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:1080/proxy/: test<... (200; 11.677939ms) Jan 24 00:35:16.890: INFO: (19) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname2/proxy/: bar (200; 11.563871ms) Jan 24 00:35:16.890: INFO: (19) /api/v1/namespaces/proxy-8543/services/proxy-service-bvsb7:portname1/proxy/: foo (200; 11.695078ms) Jan 24 00:35:16.890: INFO: (19) /api/v1/namespaces/proxy-8543/services/http:proxy-service-bvsb7:portname2/proxy/: bar (200; 11.735742ms) Jan 24 00:35:16.890: INFO: (19) /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:162/proxy/: bar (200; 11.706348ms) STEP: deleting ReplicationController proxy-service-bvsb7 in namespace proxy-8543, will wait for the garbage collector to delete the pods Jan 24 00:35:16.949: INFO: Deleting ReplicationController proxy-service-bvsb7 took: 6.66725ms Jan 24 00:35:17.249: INFO: Terminating ReplicationController proxy-service-bvsb7 pods took: 300.325185ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:35:22.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8543" for this suite. 
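All 320 attempts above go through the apiserver's proxy subresource, whose path grammar is /api/v1/namespaces/NS/{pods|services}/[scheme:]NAME[:port]/proxy/PATH; the scheme prefix (http:/https:) and the port or port-name suffix are both optional. Two of the exercised URLs replayed by hand, which resolve only while the test namespace exists:

# Exact URLs taken from the log; valid only for the lifetime of proxy-8543.
kubectl get --raw /api/v1/namespaces/proxy-8543/pods/proxy-service-bvsb7-zw877:160/proxy/
kubectl get --raw /api/v1/namespaces/proxy-8543/services/https:proxy-service-bvsb7:tlsportname1/proxy/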
• [SLOW TEST:19.374 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":148,"skipped":2288,"failed":0} SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:35:22.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Jan 24 00:35:22.658: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:35:33.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4356" for this suite. 
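The init-container spec relies on the rule that with restartPolicy: Never a failed init container is not retried, so the app containers never start and the pod phase goes Failed. A self-contained manifest sketch; the names and the busybox image are illustrative, not the e2e framework's pod:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox
    command: ['sh', '-c', 'exit 1']   # init container fails once, is not retried
  containers:
  - name: app
    image: busybox
    command: ['sh', '-c', 'echo never runs']
EOF
# The app container is never started and the pod settles at Failed.
kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'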
• [SLOW TEST:11.121 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":149,"skipped":2295,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:35:33.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4365.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4365.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4365.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4365.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 24 00:35:45.998: INFO: DNS probes using dns-test-644e0299-6bba-44e1-981b-7ef2807cda60 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4365.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4365.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4365.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4365.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 24 00:36:00.200: INFO: File wheezy_udp@dns-test-service-3.dns-4365.svc.cluster.local from pod dns-4365/dns-test-077c5dcb-f594-4f15-941b-910487f4248c contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 24 00:36:00.205: INFO: File jessie_udp@dns-test-service-3.dns-4365.svc.cluster.local from pod dns-4365/dns-test-077c5dcb-f594-4f15-941b-910487f4248c contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jan 24 00:36:00.205: INFO: Lookups using dns-4365/dns-test-077c5dcb-f594-4f15-941b-910487f4248c failed for: [wheezy_udp@dns-test-service-3.dns-4365.svc.cluster.local jessie_udp@dns-test-service-3.dns-4365.svc.cluster.local] Jan 24 00:36:05.212: INFO: File wheezy_udp@dns-test-service-3.dns-4365.svc.cluster.local from pod dns-4365/dns-test-077c5dcb-f594-4f15-941b-910487f4248c contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 24 00:36:05.215: INFO: File jessie_udp@dns-test-service-3.dns-4365.svc.cluster.local from pod dns-4365/dns-test-077c5dcb-f594-4f15-941b-910487f4248c contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 24 00:36:05.215: INFO: Lookups using dns-4365/dns-test-077c5dcb-f594-4f15-941b-910487f4248c failed for: [wheezy_udp@dns-test-service-3.dns-4365.svc.cluster.local jessie_udp@dns-test-service-3.dns-4365.svc.cluster.local] Jan 24 00:36:10.213: INFO: File wheezy_udp@dns-test-service-3.dns-4365.svc.cluster.local from pod dns-4365/dns-test-077c5dcb-f594-4f15-941b-910487f4248c contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 24 00:36:10.218: INFO: File jessie_udp@dns-test-service-3.dns-4365.svc.cluster.local from pod dns-4365/dns-test-077c5dcb-f594-4f15-941b-910487f4248c contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 24 00:36:10.218: INFO: Lookups using dns-4365/dns-test-077c5dcb-f594-4f15-941b-910487f4248c failed for: [wheezy_udp@dns-test-service-3.dns-4365.svc.cluster.local jessie_udp@dns-test-service-3.dns-4365.svc.cluster.local] Jan 24 00:36:15.212: INFO: File wheezy_udp@dns-test-service-3.dns-4365.svc.cluster.local from pod dns-4365/dns-test-077c5dcb-f594-4f15-941b-910487f4248c contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 24 00:36:15.216: INFO: File jessie_udp@dns-test-service-3.dns-4365.svc.cluster.local from pod dns-4365/dns-test-077c5dcb-f594-4f15-941b-910487f4248c contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 24 00:36:15.216: INFO: Lookups using dns-4365/dns-test-077c5dcb-f594-4f15-941b-910487f4248c failed for: [wheezy_udp@dns-test-service-3.dns-4365.svc.cluster.local jessie_udp@dns-test-service-3.dns-4365.svc.cluster.local] Jan 24 00:36:20.224: INFO: DNS probes using dns-test-077c5dcb-f594-4f15-941b-910487f4248c succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4365.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4365.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4365.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4365.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 24 00:36:36.531: INFO: DNS probes using dns-test-7c3e9594-255d-4516-aef5-8595241d9dbc succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:36:36.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4365" for this suite. 
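The probes above are plain dig loops comparing the CNAME answer for the service name against spec.externalName; the transient failures show resolvers still returning foo.example.com. until the edited record propagates. A hedged reproduction of the service and the mutation, with nslookup in a busybox pod standing in for the test's dig loop (service name taken from the log, namespace assumed to be default):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
spec:
  type: ExternalName
  externalName: foo.example.com
EOF
# Repoint the service, as the spec does, then query from inside the cluster;
# the CNAME answer should track spec.externalName once caches expire.
kubectl patch service dns-test-service-3 --type=merge -p '{"spec":{"externalName":"bar.example.com"}}'
kubectl run digger --rm -it --restart=Never --image=busybox -- \
  nslookup dns-test-service-3.default.svc.cluster.local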
• [SLOW TEST:63.146 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":150,"skipped":2307,"failed":0} [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:36:36.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 24 00:36:37.076: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jan 24 00:36:37.097: INFO: Number of nodes with available pods: 0 Jan 24 00:36:37.097: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Jan 24 00:36:37.280: INFO: Number of nodes with available pods: 0 Jan 24 00:36:37.280: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:36:38.289: INFO: Number of nodes with available pods: 0 Jan 24 00:36:38.289: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:36:39.433: INFO: Number of nodes with available pods: 0 Jan 24 00:36:39.433: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:36:40.285: INFO: Number of nodes with available pods: 0 Jan 24 00:36:40.285: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:36:41.286: INFO: Number of nodes with available pods: 0 Jan 24 00:36:41.286: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:36:42.735: INFO: Number of nodes with available pods: 0 Jan 24 00:36:42.735: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:36:43.317: INFO: Number of nodes with available pods: 0 Jan 24 00:36:43.317: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:36:44.346: INFO: Number of nodes with available pods: 0 Jan 24 00:36:44.346: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:36:45.287: INFO: Number of nodes with available pods: 0 Jan 24 00:36:45.287: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:36:46.284: INFO: Number of nodes with available pods: 1 Jan 24 00:36:46.284: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jan 24 00:36:46.324: INFO: Number of nodes with available pods: 1 Jan 24 00:36:46.325: INFO: Number of running nodes: 0, number of available pods: 1 Jan 24 00:36:47.331: INFO: Number of nodes with available 
pods: 0 Jan 24 00:36:47.331: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jan 24 00:36:47.363: INFO: Number of nodes with available pods: 0 Jan 24 00:36:47.363: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:36:48.369: INFO: Number of nodes with available pods: 0 Jan 24 00:36:48.369: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:36:49.553: INFO: Number of nodes with available pods: 0 Jan 24 00:36:49.553: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:36:50.369: INFO: Number of nodes with available pods: 0 Jan 24 00:36:50.369: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:36:51.381: INFO: Number of nodes with available pods: 0 Jan 24 00:36:51.382: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:36:52.370: INFO: Number of nodes with available pods: 0 Jan 24 00:36:52.370: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:36:53.369: INFO: Number of nodes with available pods: 0 Jan 24 00:36:53.370: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:36:54.373: INFO: Number of nodes with available pods: 0 Jan 24 00:36:54.373: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:36:55.371: INFO: Number of nodes with available pods: 0 Jan 24 00:36:55.371: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:36:56.370: INFO: Number of nodes with available pods: 0 Jan 24 00:36:56.370: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:36:57.692: INFO: Number of nodes with available pods: 0 Jan 24 00:36:57.692: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:36:58.369: INFO: Number of nodes with available pods: 0 Jan 24 00:36:58.369: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:36:59.370: INFO: Number of nodes with available pods: 0 Jan 24 00:36:59.371: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:37:00.370: INFO: Number of nodes with available pods: 1 Jan 24 00:37:00.370: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6614, will wait for the garbage collector to delete the pods Jan 24 00:37:00.448: INFO: Deleting DaemonSet.extensions daemon-set took: 12.428122ms Jan 24 00:37:00.748: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.305831ms Jan 24 00:37:07.359: INFO: Number of nodes with available pods: 0 Jan 24 00:37:07.359: INFO: Number of running nodes: 0, number of available pods: 0 Jan 24 00:37:07.364: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6614/daemonsets","resourceVersion":"3919283"},"items":null} Jan 24 00:37:07.367: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6614/pods","resourceVersion":"3919283"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:37:07.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6614" for this suite. • [SLOW TEST:30.594 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":151,"skipped":2307,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:37:07.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:687 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service externalname-service with the type=ExternalName in namespace services-9455 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-9455 I0124 00:37:07.570217 8 runners.go:189] Created replication controller with name: externalname-service, namespace: services-9455, replica count: 2 I0124 00:37:10.621125 8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:37:13.621780 8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:37:16.622319 8 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:37:19.622695 8 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 24 00:37:19.622: INFO: Creating new exec pod Jan 24 00:37:28.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9455 execpodchf75 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jan 24 00:37:31.085: INFO: stderr: "I0124 00:37:30.896031 3061 log.go:172] (0xc00001f290) (0xc00067bea0) Create stream\nI0124 00:37:30.896080 3061 log.go:172] (0xc00001f290) (0xc00067bea0) Stream added, broadcasting: 1\nI0124 00:37:30.899721 3061 log.go:172] (0xc00001f290) Reply frame received for 1\nI0124 00:37:30.899760 3061 log.go:172] (0xc00001f290) (0xc000612780) Create stream\nI0124 00:37:30.899779 3061 log.go:172] (0xc00001f290) (0xc000612780) Stream added, broadcasting: 3\nI0124 00:37:30.902427 3061 log.go:172] 
(0xc00001f290) Reply frame received for 3\nI0124 00:37:30.902453 3061 log.go:172] (0xc00001f290) (0xc000455400) Create stream\nI0124 00:37:30.902466 3061 log.go:172] (0xc00001f290) (0xc000455400) Stream added, broadcasting: 5\nI0124 00:37:30.907860 3061 log.go:172] (0xc00001f290) Reply frame received for 5\nI0124 00:37:30.996570 3061 log.go:172] (0xc00001f290) Data frame received for 5\nI0124 00:37:30.996621 3061 log.go:172] (0xc000455400) (5) Data frame handling\nI0124 00:37:30.996636 3061 log.go:172] (0xc000455400) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0124 00:37:31.004177 3061 log.go:172] (0xc00001f290) Data frame received for 5\nI0124 00:37:31.004204 3061 log.go:172] (0xc000455400) (5) Data frame handling\nI0124 00:37:31.004222 3061 log.go:172] (0xc000455400) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0124 00:37:31.077681 3061 log.go:172] (0xc00001f290) Data frame received for 1\nI0124 00:37:31.077763 3061 log.go:172] (0xc00067bea0) (1) Data frame handling\nI0124 00:37:31.077775 3061 log.go:172] (0xc00067bea0) (1) Data frame sent\nI0124 00:37:31.077794 3061 log.go:172] (0xc00001f290) (0xc00067bea0) Stream removed, broadcasting: 1\nI0124 00:37:31.078426 3061 log.go:172] (0xc00001f290) (0xc000612780) Stream removed, broadcasting: 3\nI0124 00:37:31.078469 3061 log.go:172] (0xc00001f290) (0xc000455400) Stream removed, broadcasting: 5\nI0124 00:37:31.078579 3061 log.go:172] (0xc00001f290) (0xc00067bea0) Stream removed, broadcasting: 1\nI0124 00:37:31.078621 3061 log.go:172] (0xc00001f290) (0xc000612780) Stream removed, broadcasting: 3\nI0124 00:37:31.078650 3061 log.go:172] (0xc00001f290) (0xc000455400) Stream removed, broadcasting: 5\nI0124 00:37:31.078767 3061 log.go:172] (0xc00001f290) Go away received\n" Jan 24 00:37:31.085: INFO: stdout: "" Jan 24 00:37:31.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9455 execpodchf75 -- /bin/sh -x -c nc -zv -t -w 2 10.96.42.91 80' Jan 24 00:37:31.462: INFO: stderr: "I0124 00:37:31.299647 3083 log.go:172] (0xc000b04dc0) (0xc000a2c3c0) Create stream\nI0124 00:37:31.299735 3083 log.go:172] (0xc000b04dc0) (0xc000a2c3c0) Stream added, broadcasting: 1\nI0124 00:37:31.312634 3083 log.go:172] (0xc000b04dc0) Reply frame received for 1\nI0124 00:37:31.312668 3083 log.go:172] (0xc000b04dc0) (0xc000656780) Create stream\nI0124 00:37:31.312677 3083 log.go:172] (0xc000b04dc0) (0xc000656780) Stream added, broadcasting: 3\nI0124 00:37:31.314288 3083 log.go:172] (0xc000b04dc0) Reply frame received for 3\nI0124 00:37:31.314347 3083 log.go:172] (0xc000b04dc0) (0xc00055b400) Create stream\nI0124 00:37:31.314361 3083 log.go:172] (0xc000b04dc0) (0xc00055b400) Stream added, broadcasting: 5\nI0124 00:37:31.316155 3083 log.go:172] (0xc000b04dc0) Reply frame received for 5\nI0124 00:37:31.390293 3083 log.go:172] (0xc000b04dc0) Data frame received for 5\nI0124 00:37:31.390361 3083 log.go:172] (0xc00055b400) (5) Data frame handling\nI0124 00:37:31.390390 3083 log.go:172] (0xc00055b400) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.42.91 80\nI0124 00:37:31.395705 3083 log.go:172] (0xc000b04dc0) Data frame received for 5\nI0124 00:37:31.395775 3083 log.go:172] (0xc00055b400) (5) Data frame handling\nI0124 00:37:31.395823 3083 log.go:172] (0xc00055b400) (5) Data frame sent\nConnection to 10.96.42.91 80 port [tcp/http] succeeded!\nI0124 00:37:31.453002 3083 log.go:172] (0xc000b04dc0) (0xc000656780) Stream removed, broadcasting: 3\nI0124 00:37:31.453188 3083 
log.go:172] (0xc000b04dc0) Data frame received for 1\nI0124 00:37:31.453214 3083 log.go:172] (0xc000a2c3c0) (1) Data frame handling\nI0124 00:37:31.453256 3083 log.go:172] (0xc000a2c3c0) (1) Data frame sent\nI0124 00:37:31.453282 3083 log.go:172] (0xc000b04dc0) (0xc000a2c3c0) Stream removed, broadcasting: 1\nI0124 00:37:31.453488 3083 log.go:172] (0xc000b04dc0) (0xc00055b400) Stream removed, broadcasting: 5\nI0124 00:37:31.453541 3083 log.go:172] (0xc000b04dc0) Go away received\nI0124 00:37:31.454110 3083 log.go:172] (0xc000b04dc0) (0xc000a2c3c0) Stream removed, broadcasting: 1\nI0124 00:37:31.454127 3083 log.go:172] (0xc000b04dc0) (0xc000656780) Stream removed, broadcasting: 3\nI0124 00:37:31.454141 3083 log.go:172] (0xc000b04dc0) (0xc00055b400) Stream removed, broadcasting: 5\n" Jan 24 00:37:31.462: INFO: stdout: "" Jan 24 00:37:31.462: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:37:31.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9455" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 • [SLOW TEST:24.100 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":152,"skipped":2359,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:37:31.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 24 00:37:45.315: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:37:45.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1496" for this suite. 
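The check logged above as "Expected: &{} to match Container's Termination Message: --" hinges on how TerminationMessagePolicy FallbackToLogsOnError behaves: container logs are substituted for the termination message only when the container fails, so a successful exit that never writes to terminationMessagePath yields an empty message. A minimal sketch, assuming kubectl access; the pod name and busybox image are illustrative.

#!/bin/sh
# Sketch: a succeeding container with FallbackToLogsOnError should report an
# empty termination message, even though it wrote to its logs.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-msg-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo some log output; exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# Wait for the pod to finish, then read the (empty) message field.
while [ "$(kubectl get pod termination-msg-demo -o jsonpath='{.status.phase}')" != "Succeeded" ]; do
  sleep 2
done
kubectl get pod termination-msg-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'

Swapping exit 0 for exit 1 should flip the behavior and surface the log line as the message, since the fallback applies only on failure.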
• [SLOW TEST:13.865 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2362,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:37:45.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9971.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-9971.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9971.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9971.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-9971.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9971.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 24 00:37:59.698: INFO: DNS probes using dns-9971/dns-test-a0b89d13-2d44-433f-8d67-e348024b1bbb succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:37:59.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9971" for this suite. • [SLOW TEST:14.599 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":154,"skipped":2436,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:37:59.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-7d8c7383-b78e-4e58-86cf-87b97e7c45ac STEP: Creating a pod to test consume configMaps Jan 24 00:38:00.280: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dbd5c2c6-e121-42bd-8941-44bd88350ca2" in namespace "projected-7205" to be "success or failure" Jan 24 00:38:00.288: INFO: Pod "pod-projected-configmaps-dbd5c2c6-e121-42bd-8941-44bd88350ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.673658ms Jan 24 00:38:02.293: INFO: Pod "pod-projected-configmaps-dbd5c2c6-e121-42bd-8941-44bd88350ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012961046s Jan 24 00:38:04.310: INFO: Pod "pod-projected-configmaps-dbd5c2c6-e121-42bd-8941-44bd88350ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02988273s Jan 24 00:38:06.318: INFO: Pod "pod-projected-configmaps-dbd5c2c6-e121-42bd-8941-44bd88350ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037703798s Jan 24 00:38:08.322: INFO: Pod "pod-projected-configmaps-dbd5c2c6-e121-42bd-8941-44bd88350ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.041899438s Jan 24 00:38:10.331: INFO: Pod "pod-projected-configmaps-dbd5c2c6-e121-42bd-8941-44bd88350ca2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.050484429s STEP: Saw pod success Jan 24 00:38:10.331: INFO: Pod "pod-projected-configmaps-dbd5c2c6-e121-42bd-8941-44bd88350ca2" satisfied condition "success or failure" Jan 24 00:38:10.334: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-dbd5c2c6-e121-42bd-8941-44bd88350ca2 container projected-configmap-volume-test: STEP: delete the pod Jan 24 00:38:10.397: INFO: Waiting for pod pod-projected-configmaps-dbd5c2c6-e121-42bd-8941-44bd88350ca2 to disappear Jan 24 00:38:10.402: INFO: Pod pod-projected-configmaps-dbd5c2c6-e121-42bd-8941-44bd88350ca2 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:38:10.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7205" for this suite. • [SLOW TEST:10.425 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2436,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:38:10.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-map-930119d9-f1c5-42cf-9d31-6a26ef76b5b7 STEP: Creating a pod to test consume configMaps Jan 24 00:38:10.784: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1061456b-5e15-4f49-a463-9455b4c8f09a" in namespace "projected-4645" to be "success or failure" Jan 24 00:38:10.791: INFO: Pod "pod-projected-configmaps-1061456b-5e15-4f49-a463-9455b4c8f09a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.743615ms Jan 24 00:38:12.794: INFO: Pod "pod-projected-configmaps-1061456b-5e15-4f49-a463-9455b4c8f09a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0098534s Jan 24 00:38:14.812: INFO: Pod "pod-projected-configmaps-1061456b-5e15-4f49-a463-9455b4c8f09a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028122784s Jan 24 00:38:16.818: INFO: Pod "pod-projected-configmaps-1061456b-5e15-4f49-a463-9455b4c8f09a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033445615s Jan 24 00:38:18.826: INFO: Pod "pod-projected-configmaps-1061456b-5e15-4f49-a463-9455b4c8f09a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.041650917s Jan 24 00:38:20.830: INFO: Pod "pod-projected-configmaps-1061456b-5e15-4f49-a463-9455b4c8f09a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.045525383s STEP: Saw pod success Jan 24 00:38:20.830: INFO: Pod "pod-projected-configmaps-1061456b-5e15-4f49-a463-9455b4c8f09a" satisfied condition "success or failure" Jan 24 00:38:20.832: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-1061456b-5e15-4f49-a463-9455b4c8f09a container projected-configmap-volume-test: STEP: delete the pod Jan 24 00:38:20.869: INFO: Waiting for pod pod-projected-configmaps-1061456b-5e15-4f49-a463-9455b4c8f09a to disappear Jan 24 00:38:20.874: INFO: Pod pod-projected-configmaps-1061456b-5e15-4f49-a463-9455b4c8f09a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:38:20.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4645" for this suite. • [SLOW TEST:10.467 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2436,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:38:20.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-009532b6-2dbe-494c-b718-558945a97276 STEP: Creating a pod to test consume configMaps Jan 24 00:38:21.040: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b06f2ffa-b6cc-4715-a973-1d103ec65cd9" in namespace "projected-8406" to be "success or failure" Jan 24 00:38:21.046: INFO: Pod "pod-projected-configmaps-b06f2ffa-b6cc-4715-a973-1d103ec65cd9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156634ms Jan 24 00:38:23.106: INFO: Pod "pod-projected-configmaps-b06f2ffa-b6cc-4715-a973-1d103ec65cd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065946703s Jan 24 00:38:25.113: INFO: Pod "pod-projected-configmaps-b06f2ffa-b6cc-4715-a973-1d103ec65cd9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073356382s Jan 24 00:38:27.118: INFO: Pod "pod-projected-configmaps-b06f2ffa-b6cc-4715-a973-1d103ec65cd9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077793902s Jan 24 00:38:29.124: INFO: Pod "pod-projected-configmaps-b06f2ffa-b6cc-4715-a973-1d103ec65cd9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084283618s Jan 24 00:38:31.135: INFO: Pod "pod-projected-configmaps-b06f2ffa-b6cc-4715-a973-1d103ec65cd9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.094978235s STEP: Saw pod success Jan 24 00:38:31.135: INFO: Pod "pod-projected-configmaps-b06f2ffa-b6cc-4715-a973-1d103ec65cd9" satisfied condition "success or failure" Jan 24 00:38:31.139: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-b06f2ffa-b6cc-4715-a973-1d103ec65cd9 container projected-configmap-volume-test: STEP: delete the pod Jan 24 00:38:31.196: INFO: Waiting for pod pod-projected-configmaps-b06f2ffa-b6cc-4715-a973-1d103ec65cd9 to disappear Jan 24 00:38:31.210: INFO: Pod pod-projected-configmaps-b06f2ffa-b6cc-4715-a973-1d103ec65cd9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:38:31.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8406" for this suite. • [SLOW TEST:10.342 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2442,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:38:31.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 24 00:38:38.503: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:38:38.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2265" for this suite. 
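The "Expected: &{DONE} to match Container's Termination Message: DONE" assertion above exercises the complementary knob, terminationMessagePath: the kubelet reads whatever the container wrote at that path, wherever it is and whichever user wrote it. A minimal sketch under the same assumptions as before; the pod name, path, and uid are illustrative.

#!/bin/sh
# Sketch: write DONE to a non-default terminationMessagePath as a non-root
# user and confirm the kubelet surfaces it as the termination message.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-path-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "printf DONE > /dev/termination-custom"]
    terminationMessagePath: /dev/termination-custom
EOF
while [ "$(kubectl get pod termination-path-demo -o jsonpath='{.status.phase}')" != "Succeeded" ]; do
  sleep 2
done
# Expected output: DONE
kubectl get pod termination-path-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'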
• [SLOW TEST:7.393 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2456,"failed":0} SS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:38:38.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 24 00:38:38.830: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-b5d1c95b-d8f8-42b9-bf14-68baddf9b99c" in namespace "security-context-test-7680" to be "success or failure" Jan 24 00:38:38.854: INFO: Pod "alpine-nnp-false-b5d1c95b-d8f8-42b9-bf14-68baddf9b99c": Phase="Pending", Reason="", readiness=false. Elapsed: 23.17995ms Jan 24 00:38:40.869: INFO: Pod "alpine-nnp-false-b5d1c95b-d8f8-42b9-bf14-68baddf9b99c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038916814s Jan 24 00:38:42.907: INFO: Pod "alpine-nnp-false-b5d1c95b-d8f8-42b9-bf14-68baddf9b99c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076127594s Jan 24 00:38:44.914: INFO: Pod "alpine-nnp-false-b5d1c95b-d8f8-42b9-bf14-68baddf9b99c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08379956s Jan 24 00:38:46.921: INFO: Pod "alpine-nnp-false-b5d1c95b-d8f8-42b9-bf14-68baddf9b99c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.090585125s Jan 24 00:38:46.921: INFO: Pod "alpine-nnp-false-b5d1c95b-d8f8-42b9-bf14-68baddf9b99c" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:38:46.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7680" for this suite. 
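Behind the alpine-nnp-false pod above, allowPrivilegeEscalation: false maps to the kernel's no_new_privs bit, which prevents setuid binaries from raising the effective UID of the uid-1000 process. A minimal sketch, assuming kubectl access and Linux nodes recent enough (roughly 4.10+) to expose the NoNewPrivs field in /proc status; the pod name is illustrative.

#!/bin/sh
# Sketch: start a non-root container with allowPrivilegeEscalation: false and
# inspect the no_new_privs bit on its init process.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nnp-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: alpine
    command: ["sleep", "3600"]
    securityContext:
      runAsUser: 1000
      allowPrivilegeEscalation: false
EOF
while [ "$(kubectl get pod nnp-demo -o jsonpath='{.status.phase}')" != "Running" ]; do
  sleep 2
done
# "NoNewPrivs: 1" means privilege escalation is blocked for this process tree.
kubectl exec nnp-demo -- grep NoNewPrivs /proc/1/status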
• [SLOW TEST:8.321 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2458,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:38:46.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-upd-0757013b-5654-4385-b2b3-23bb62e2e285 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-0757013b-5654-4385-b2b3-23bb62e2e285 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:40:03.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7308" for this suite. 
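The 76-second runtime above is mostly the wait in "waiting to observe update in volume": ConfigMap volume updates are eventually consistent, landing only after the kubelet's periodic sync swaps the projected files. A minimal sketch for observing that delay by hand, assuming kubectl access; all names are illustrative.

#!/bin/sh
# Sketch: mount a ConfigMap, edit it, and watch the mounted file change after
# the kubelet's next sync (typically up to a minute or so).
kubectl create configmap demo-cm --from-literal=key=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-watch
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/demo/key; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/demo
  volumes:
  - name: cm
    configMap:
      name: demo-cm
EOF
while [ "$(kubectl get pod cm-watch -o jsonpath='{.status.phase}')" != "Running" ]; do
  sleep 2
done
# Update the ConfigMap in place, then watch the logs flip from value-1 to value-2.
kubectl patch configmap demo-cm -p '{"data":{"key":"value-2"}}'
kubectl logs -f cm-watch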
• [SLOW TEST:76.973 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2480,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:40:03.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Jan 24 00:40:14.592: INFO: Successfully updated pod "labelsupdate483848e2-df02-4f7e-bed5-9eb33453054c" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:40:18.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-382" for this suite. • [SLOW TEST:14.779 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2483,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:40:18.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:75 Jan 24 00:40:18.805: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the sample API server. 
Jan 24 00:40:19.381: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jan 24 00:40:21.589: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423219, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423219, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423219, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423219, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:40:23.671: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423219, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423219, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423219, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423219, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:40:25.595: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423219, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423219, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423219, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423219, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:40:27.599: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423219, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423219, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423219, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423219, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:40:29.597: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423219, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423219, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423219, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423219, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:40:32.550: INFO: Waited 942.816912ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:66 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:40:32.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4144" for this suite. 
• [SLOW TEST:14.648 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":162,"skipped":2494,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:40:33.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name projected-secret-test-edb00c5b-33aa-4981-8236-c2f56eac932b STEP: Creating a pod to test consume secrets Jan 24 00:40:33.784: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-41aab349-1641-4f16-a77e-6ab38a224184" in namespace "projected-2870" to be "success or failure" Jan 24 00:40:33.797: INFO: Pod "pod-projected-secrets-41aab349-1641-4f16-a77e-6ab38a224184": Phase="Pending", Reason="", readiness=false. Elapsed: 13.00607ms Jan 24 00:40:35.805: INFO: Pod "pod-projected-secrets-41aab349-1641-4f16-a77e-6ab38a224184": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020887759s Jan 24 00:40:37.816: INFO: Pod "pod-projected-secrets-41aab349-1641-4f16-a77e-6ab38a224184": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032129136s Jan 24 00:40:39.825: INFO: Pod "pod-projected-secrets-41aab349-1641-4f16-a77e-6ab38a224184": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04107556s Jan 24 00:40:41.834: INFO: Pod "pod-projected-secrets-41aab349-1641-4f16-a77e-6ab38a224184": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049578446s Jan 24 00:40:43.842: INFO: Pod "pod-projected-secrets-41aab349-1641-4f16-a77e-6ab38a224184": Phase="Pending", Reason="", readiness=false. Elapsed: 10.057981062s Jan 24 00:40:45.850: INFO: Pod "pod-projected-secrets-41aab349-1641-4f16-a77e-6ab38a224184": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.065462358s STEP: Saw pod success Jan 24 00:40:45.850: INFO: Pod "pod-projected-secrets-41aab349-1641-4f16-a77e-6ab38a224184" satisfied condition "success or failure" Jan 24 00:40:45.854: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-41aab349-1641-4f16-a77e-6ab38a224184 container secret-volume-test: STEP: delete the pod Jan 24 00:40:45.903: INFO: Waiting for pod pod-projected-secrets-41aab349-1641-4f16-a77e-6ab38a224184 to disappear Jan 24 00:40:45.980: INFO: Pod pod-projected-secrets-41aab349-1641-4f16-a77e-6ab38a224184 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:40:45.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2870" for this suite. • [SLOW TEST:12.645 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2497,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:40:45.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:687 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service nodeport-service with the type=NodePort in namespace services-287 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-287 STEP: creating replication controller externalsvc in namespace services-287 I0124 00:40:46.293467 8 runners.go:189] Created replication controller with name: externalsvc, namespace: services-287, replica count: 2 I0124 00:40:49.343877 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:40:52.344118 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:40:55.344558 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:40:58.345265 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jan 24 
00:40:58.408: INFO: Creating new exec pod Jan 24 00:41:04.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-287 execpodkxxtx -- /bin/sh -x -c nslookup nodeport-service' Jan 24 00:41:04.938: INFO: stderr: "I0124 00:41:04.681910 3105 log.go:172] (0xc0008ecfd0) (0xc0008da780) Create stream\nI0124 00:41:04.682044 3105 log.go:172] (0xc0008ecfd0) (0xc0008da780) Stream added, broadcasting: 1\nI0124 00:41:04.689237 3105 log.go:172] (0xc0008ecfd0) Reply frame received for 1\nI0124 00:41:04.689270 3105 log.go:172] (0xc0008ecfd0) (0xc0005dfe00) Create stream\nI0124 00:41:04.689286 3105 log.go:172] (0xc0008ecfd0) (0xc0005dfe00) Stream added, broadcasting: 3\nI0124 00:41:04.691266 3105 log.go:172] (0xc0008ecfd0) Reply frame received for 3\nI0124 00:41:04.691370 3105 log.go:172] (0xc0008ecfd0) (0xc000532a00) Create stream\nI0124 00:41:04.691397 3105 log.go:172] (0xc0008ecfd0) (0xc000532a00) Stream added, broadcasting: 5\nI0124 00:41:04.693643 3105 log.go:172] (0xc0008ecfd0) Reply frame received for 5\nI0124 00:41:04.790798 3105 log.go:172] (0xc0008ecfd0) Data frame received for 5\nI0124 00:41:04.790858 3105 log.go:172] (0xc000532a00) (5) Data frame handling\nI0124 00:41:04.790873 3105 log.go:172] (0xc000532a00) (5) Data frame sent\n+ nslookup nodeport-service\nI0124 00:41:04.815646 3105 log.go:172] (0xc0008ecfd0) Data frame received for 3\nI0124 00:41:04.815729 3105 log.go:172] (0xc0005dfe00) (3) Data frame handling\nI0124 00:41:04.815745 3105 log.go:172] (0xc0005dfe00) (3) Data frame sent\nI0124 00:41:04.819822 3105 log.go:172] (0xc0008ecfd0) Data frame received for 3\nI0124 00:41:04.819842 3105 log.go:172] (0xc0005dfe00) (3) Data frame handling\nI0124 00:41:04.819857 3105 log.go:172] (0xc0005dfe00) (3) Data frame sent\nI0124 00:41:04.930098 3105 log.go:172] (0xc0008ecfd0) (0xc0005dfe00) Stream removed, broadcasting: 3\nI0124 00:41:04.930325 3105 log.go:172] (0xc0008ecfd0) Data frame received for 1\nI0124 00:41:04.930365 3105 log.go:172] (0xc0008da780) (1) Data frame handling\nI0124 00:41:04.930403 3105 log.go:172] (0xc0008da780) (1) Data frame sent\nI0124 00:41:04.930460 3105 log.go:172] (0xc0008ecfd0) (0xc0008da780) Stream removed, broadcasting: 1\nI0124 00:41:04.930694 3105 log.go:172] (0xc0008ecfd0) (0xc000532a00) Stream removed, broadcasting: 5\nI0124 00:41:04.930766 3105 log.go:172] (0xc0008ecfd0) Go away received\nI0124 00:41:04.930947 3105 log.go:172] (0xc0008ecfd0) (0xc0008da780) Stream removed, broadcasting: 1\nI0124 00:41:04.930961 3105 log.go:172] (0xc0008ecfd0) (0xc0005dfe00) Stream removed, broadcasting: 3\nI0124 00:41:04.930967 3105 log.go:172] (0xc0008ecfd0) (0xc000532a00) Stream removed, broadcasting: 5\n" Jan 24 00:41:04.939: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-287.svc.cluster.local\tcanonical name = externalsvc.services-287.svc.cluster.local.\nName:\texternalsvc.services-287.svc.cluster.local\nAddress: 10.96.119.148\n\n" STEP: deleting ReplicationController externalsvc in namespace services-287, will wait for the garbage collector to delete the pods Jan 24 00:41:04.997: INFO: Deleting ReplicationController externalsvc took: 4.583219ms Jan 24 00:41:05.298: INFO: Terminating ReplicationController externalsvc pods took: 300.279421ms Jan 24 00:41:23.129: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:41:23.200: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-287" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 • [SLOW TEST:37.252 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":164,"skipped":2501,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:41:23.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:687 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating service nodeport-test with type=NodePort in namespace services-5952 STEP: creating replication controller nodeport-test in namespace services-5952 I0124 00:41:23.400673 8 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-5952, replica count: 2 I0124 00:41:26.451318 8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:41:29.451776 8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:41:32.452056 8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0124 00:41:35.452436 8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 24 00:41:35.452: INFO: Creating new exec pod Jan 24 00:41:44.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5952 execpod7kdrj -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Jan 24 00:41:44.801: INFO: stderr: "I0124 00:41:44.655898 3123 log.go:172] (0xc00091cd10) (0xc0008f2460) Create stream\nI0124 00:41:44.655977 3123 log.go:172] (0xc00091cd10) (0xc0008f2460) Stream added, broadcasting: 1\nI0124 00:41:44.660478 3123 log.go:172] (0xc00091cd10) Reply frame received for 1\nI0124 00:41:44.660502 3123 log.go:172] (0xc00091cd10) (0xc000679cc0) Create stream\nI0124 00:41:44.660508 3123 log.go:172] (0xc00091cd10) (0xc000679cc0) Stream added, broadcasting: 3\nI0124 00:41:44.662042 3123 log.go:172] (0xc00091cd10) Reply frame received for 3\nI0124 00:41:44.662069 3123 log.go:172] (0xc00091cd10) 
(0xc00060a8c0) Create stream\nI0124 00:41:44.662085 3123 log.go:172] (0xc00091cd10) (0xc00060a8c0) Stream added, broadcasting: 5\nI0124 00:41:44.663440 3123 log.go:172] (0xc00091cd10) Reply frame received for 5\nI0124 00:41:44.726986 3123 log.go:172] (0xc00091cd10) Data frame received for 5\nI0124 00:41:44.727056 3123 log.go:172] (0xc00060a8c0) (5) Data frame handling\nI0124 00:41:44.727069 3123 log.go:172] (0xc00060a8c0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0124 00:41:44.733833 3123 log.go:172] (0xc00091cd10) Data frame received for 5\nI0124 00:41:44.733875 3123 log.go:172] (0xc00060a8c0) (5) Data frame handling\nI0124 00:41:44.733890 3123 log.go:172] (0xc00060a8c0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0124 00:41:44.795032 3123 log.go:172] (0xc00091cd10) (0xc000679cc0) Stream removed, broadcasting: 3\nI0124 00:41:44.795188 3123 log.go:172] (0xc00091cd10) Data frame received for 1\nI0124 00:41:44.795216 3123 log.go:172] (0xc0008f2460) (1) Data frame handling\nI0124 00:41:44.795241 3123 log.go:172] (0xc0008f2460) (1) Data frame sent\nI0124 00:41:44.795293 3123 log.go:172] (0xc00091cd10) (0xc00060a8c0) Stream removed, broadcasting: 5\nI0124 00:41:44.795337 3123 log.go:172] (0xc00091cd10) (0xc0008f2460) Stream removed, broadcasting: 1\nI0124 00:41:44.795386 3123 log.go:172] (0xc00091cd10) Go away received\nI0124 00:41:44.795764 3123 log.go:172] (0xc00091cd10) (0xc0008f2460) Stream removed, broadcasting: 1\nI0124 00:41:44.795781 3123 log.go:172] (0xc00091cd10) (0xc000679cc0) Stream removed, broadcasting: 3\nI0124 00:41:44.795790 3123 log.go:172] (0xc00091cd10) (0xc00060a8c0) Stream removed, broadcasting: 5\n" Jan 24 00:41:44.801: INFO: stdout: "" Jan 24 00:41:44.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5952 execpod7kdrj -- /bin/sh -x -c nc -zv -t -w 2 10.96.144.7 80' Jan 24 00:41:45.143: INFO: stderr: "I0124 00:41:45.009369 3144 log.go:172] (0xc000a1c580) (0xc0005b1a40) Create stream\nI0124 00:41:45.009560 3144 log.go:172] (0xc000a1c580) (0xc0005b1a40) Stream added, broadcasting: 1\nI0124 00:41:45.012369 3144 log.go:172] (0xc000a1c580) Reply frame received for 1\nI0124 00:41:45.012412 3144 log.go:172] (0xc000a1c580) (0xc00094a000) Create stream\nI0124 00:41:45.012425 3144 log.go:172] (0xc000a1c580) (0xc00094a000) Stream added, broadcasting: 3\nI0124 00:41:45.013364 3144 log.go:172] (0xc000a1c580) Reply frame received for 3\nI0124 00:41:45.013397 3144 log.go:172] (0xc000a1c580) (0xc000022000) Create stream\nI0124 00:41:45.013405 3144 log.go:172] (0xc000a1c580) (0xc000022000) Stream added, broadcasting: 5\nI0124 00:41:45.014534 3144 log.go:172] (0xc000a1c580) Reply frame received for 5\nI0124 00:41:45.076255 3144 log.go:172] (0xc000a1c580) Data frame received for 5\nI0124 00:41:45.076302 3144 log.go:172] (0xc000022000) (5) Data frame handling\nI0124 00:41:45.076326 3144 log.go:172] (0xc000022000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.144.7 80\nI0124 00:41:45.076562 3144 log.go:172] (0xc000a1c580) Data frame received for 5\nI0124 00:41:45.076579 3144 log.go:172] (0xc000022000) (5) Data frame handling\nI0124 00:41:45.076593 3144 log.go:172] (0xc000022000) (5) Data frame sent\nConnection to 10.96.144.7 80 port [tcp/http] succeeded!\nI0124 00:41:45.135761 3144 log.go:172] (0xc000a1c580) Data frame received for 1\nI0124 00:41:45.135806 3144 log.go:172] (0xc0005b1a40) (1) Data frame handling\nI0124 00:41:45.135834 3144 log.go:172] (0xc0005b1a40) (1) Data frame sent\nI0124 
00:41:45.135922 3144 log.go:172] (0xc000a1c580) (0xc0005b1a40) Stream removed, broadcasting: 1\nI0124 00:41:45.136024 3144 log.go:172] (0xc000a1c580) (0xc00094a000) Stream removed, broadcasting: 3\nI0124 00:41:45.136190 3144 log.go:172] (0xc000a1c580) (0xc000022000) Stream removed, broadcasting: 5\nI0124 00:41:45.136255 3144 log.go:172] (0xc000a1c580) Go away received\nI0124 00:41:45.136868 3144 log.go:172] (0xc000a1c580) (0xc0005b1a40) Stream removed, broadcasting: 1\nI0124 00:41:45.136884 3144 log.go:172] (0xc000a1c580) (0xc00094a000) Stream removed, broadcasting: 3\nI0124 00:41:45.136893 3144 log.go:172] (0xc000a1c580) (0xc000022000) Stream removed, broadcasting: 5\n" Jan 24 00:41:45.143: INFO: stdout: "" Jan 24 00:41:45.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5952 execpod7kdrj -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 32570' Jan 24 00:41:45.478: INFO: stderr: "I0124 00:41:45.320011 3166 log.go:172] (0xc000b91130) (0xc000a4e0a0) Create stream\nI0124 00:41:45.320379 3166 log.go:172] (0xc000b91130) (0xc000a4e0a0) Stream added, broadcasting: 1\nI0124 00:41:45.327857 3166 log.go:172] (0xc000b91130) Reply frame received for 1\nI0124 00:41:45.328005 3166 log.go:172] (0xc000b91130) (0xc000bac280) Create stream\nI0124 00:41:45.328022 3166 log.go:172] (0xc000b91130) (0xc000bac280) Stream added, broadcasting: 3\nI0124 00:41:45.331458 3166 log.go:172] (0xc000b91130) Reply frame received for 3\nI0124 00:41:45.331524 3166 log.go:172] (0xc000b91130) (0xc000a021e0) Create stream\nI0124 00:41:45.331572 3166 log.go:172] (0xc000b91130) (0xc000a021e0) Stream added, broadcasting: 5\nI0124 00:41:45.336692 3166 log.go:172] (0xc000b91130) Reply frame received for 5\nI0124 00:41:45.405164 3166 log.go:172] (0xc000b91130) Data frame received for 5\nI0124 00:41:45.405223 3166 log.go:172] (0xc000a021e0) (5) Data frame handling\nI0124 00:41:45.405243 3166 log.go:172] (0xc000a021e0) (5) Data frame sent\nI0124 00:41:45.405254 3166 log.go:172] (0xc000b91130) Data frame received for 5\nI0124 00:41:45.405262 3166 log.go:172] (0xc000a021e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.2.250 32570\nConnection to 10.96.2.250 32570 port [tcp/32570] succeeded!\nI0124 00:41:45.405310 3166 log.go:172] (0xc000a021e0) (5) Data frame sent\nI0124 00:41:45.471003 3166 log.go:172] (0xc000b91130) (0xc000bac280) Stream removed, broadcasting: 3\nI0124 00:41:45.471078 3166 log.go:172] (0xc000b91130) Data frame received for 1\nI0124 00:41:45.471141 3166 log.go:172] (0xc000b91130) (0xc000a021e0) Stream removed, broadcasting: 5\nI0124 00:41:45.471205 3166 log.go:172] (0xc000a4e0a0) (1) Data frame handling\nI0124 00:41:45.471262 3166 log.go:172] (0xc000a4e0a0) (1) Data frame sent\nI0124 00:41:45.471353 3166 log.go:172] (0xc000b91130) (0xc000a4e0a0) Stream removed, broadcasting: 1\nI0124 00:41:45.471402 3166 log.go:172] (0xc000b91130) Go away received\nI0124 00:41:45.471952 3166 log.go:172] (0xc000b91130) (0xc000a4e0a0) Stream removed, broadcasting: 1\nI0124 00:41:45.471974 3166 log.go:172] (0xc000b91130) (0xc000bac280) Stream removed, broadcasting: 3\nI0124 00:41:45.471985 3166 log.go:172] (0xc000b91130) (0xc000a021e0) Stream removed, broadcasting: 5\n" Jan 24 00:41:45.479: INFO: stdout: "" Jan 24 00:41:45.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5952 execpod7kdrj -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 32570' Jan 24 00:41:45.984: INFO: stderr: "I0124 00:41:45.680330 3186 log.go:172] (0xc000a1d130) (0xc0009945a0) 
Create stream\nI0124 00:41:45.680499 3186 log.go:172] (0xc000a1d130) (0xc0009945a0) Stream added, broadcasting: 1\nI0124 00:41:45.695190 3186 log.go:172] (0xc000a1d130) Reply frame received for 1\nI0124 00:41:45.695250 3186 log.go:172] (0xc000a1d130) (0xc00064c6e0) Create stream\nI0124 00:41:45.695258 3186 log.go:172] (0xc000a1d130) (0xc00064c6e0) Stream added, broadcasting: 3\nI0124 00:41:45.697338 3186 log.go:172] (0xc000a1d130) Reply frame received for 3\nI0124 00:41:45.697363 3186 log.go:172] (0xc000a1d130) (0xc00075f360) Create stream\nI0124 00:41:45.697371 3186 log.go:172] (0xc000a1d130) (0xc00075f360) Stream added, broadcasting: 5\nI0124 00:41:45.699006 3186 log.go:172] (0xc000a1d130) Reply frame received for 5\nI0124 00:41:45.818248 3186 log.go:172] (0xc000a1d130) Data frame received for 5\nI0124 00:41:45.818310 3186 log.go:172] (0xc00075f360) (5) Data frame handling\nI0124 00:41:45.818361 3186 log.go:172] (0xc00075f360) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 32570\nI0124 00:41:45.832118 3186 log.go:172] (0xc000a1d130) Data frame received for 5\nI0124 00:41:45.832136 3186 log.go:172] (0xc00075f360) (5) Data frame handling\nI0124 00:41:45.832155 3186 log.go:172] (0xc00075f360) (5) Data frame sent\nConnection to 10.96.1.234 32570 port [tcp/32570] succeeded!\nI0124 00:41:45.968295 3186 log.go:172] (0xc000a1d130) Data frame received for 1\nI0124 00:41:45.968410 3186 log.go:172] (0xc000a1d130) (0xc00064c6e0) Stream removed, broadcasting: 3\nI0124 00:41:45.968490 3186 log.go:172] (0xc0009945a0) (1) Data frame handling\nI0124 00:41:45.968531 3186 log.go:172] (0xc0009945a0) (1) Data frame sent\nI0124 00:41:45.968549 3186 log.go:172] (0xc000a1d130) (0xc0009945a0) Stream removed, broadcasting: 1\nI0124 00:41:45.969106 3186 log.go:172] (0xc000a1d130) (0xc00075f360) Stream removed, broadcasting: 5\nI0124 00:41:45.969757 3186 log.go:172] (0xc000a1d130) (0xc0009945a0) Stream removed, broadcasting: 1\nI0124 00:41:45.969771 3186 log.go:172] (0xc000a1d130) (0xc00064c6e0) Stream removed, broadcasting: 3\nI0124 00:41:45.969781 3186 log.go:172] (0xc000a1d130) (0xc00075f360) Stream removed, broadcasting: 5\nI0124 00:41:45.970011 3186 log.go:172] (0xc000a1d130) Go away received\n" Jan 24 00:41:45.984: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:41:45.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5952" for this suite. 
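For reference, the connectivity checks this test performs can be reproduced by hand. A minimal sketch follows, using the values from this run (namespace services-5952, exec pod execpod7kdrj, ClusterIP 10.96.144.7, NodePort 32570, node IPs 10.96.2.250 and 10.96.1.234); the framework creates the Service and ReplicationController directly via the API, so the kubectl expose line is only a rough equivalent, and nc must exist in the exec pod's image:

$ kubectl expose rc nodeport-test --type=NodePort --port=80 --namespace=services-5952
$ kubectl get svc nodeport-test --namespace=services-5952 -o jsonpath='{.spec.clusterIP} {.spec.ports[0].nodePort}'
# probe the Service DNS name, the ClusterIP, and each node IP on the allocated NodePort
$ kubectl exec execpod7kdrj --namespace=services-5952 -- /bin/sh -x -c 'nc -zv -t -w 2 nodeport-test 80'
$ kubectl exec execpod7kdrj --namespace=services-5952 -- /bin/sh -x -c 'nc -zv -t -w 2 10.96.144.7 80'
$ kubectl exec execpod7kdrj --namespace=services-5952 -- /bin/sh -x -c 'nc -zv -t -w 2 10.96.2.250 32570'
$ kubectl exec execpod7kdrj --namespace=services-5952 -- /bin/sh -x -c 'nc -zv -t -w 2 10.96.1.234 32570'

Each nc -z probe exits 0 only if the TCP connect succeeds within the 2-second timeout, which is exactly the success signal the test asserts on.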
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 • [SLOW TEST:22.777 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":165,"skipped":2560,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:41:46.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:41:46.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-6170" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":166,"skipped":2570,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:41:46.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test override command Jan 24 00:41:46.336: INFO: Waiting up to 5m0s for pod "client-containers-556c1af8-6fae-4b5f-b728-0ceb9ae0a9cc" in namespace "containers-9558" to be "success or failure" Jan 24 00:41:46.343: INFO: Pod "client-containers-556c1af8-6fae-4b5f-b728-0ceb9ae0a9cc": Phase="Pending", Reason="", readiness=false. Elapsed: 7.544907ms Jan 24 00:41:48.349: INFO: Pod "client-containers-556c1af8-6fae-4b5f-b728-0ceb9ae0a9cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013436897s Jan 24 00:41:50.355: INFO: Pod "client-containers-556c1af8-6fae-4b5f-b728-0ceb9ae0a9cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018972564s Jan 24 00:41:52.380: INFO: Pod "client-containers-556c1af8-6fae-4b5f-b728-0ceb9ae0a9cc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.043825205s Jan 24 00:41:54.813: INFO: Pod "client-containers-556c1af8-6fae-4b5f-b728-0ceb9ae0a9cc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.477638609s Jan 24 00:41:56.818: INFO: Pod "client-containers-556c1af8-6fae-4b5f-b728-0ceb9ae0a9cc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.481912589s Jan 24 00:41:58.861: INFO: Pod "client-containers-556c1af8-6fae-4b5f-b728-0ceb9ae0a9cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.524914087s STEP: Saw pod success Jan 24 00:41:58.861: INFO: Pod "client-containers-556c1af8-6fae-4b5f-b728-0ceb9ae0a9cc" satisfied condition "success or failure" Jan 24 00:41:58.865: INFO: Trying to get logs from node jerma-node pod client-containers-556c1af8-6fae-4b5f-b728-0ceb9ae0a9cc container test-container: STEP: delete the pod Jan 24 00:41:58.904: INFO: Waiting for pod client-containers-556c1af8-6fae-4b5f-b728-0ceb9ae0a9cc to disappear Jan 24 00:41:58.921: INFO: Pod client-containers-556c1af8-6fae-4b5f-b728-0ceb9ae0a9cc no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:41:58.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9558" for this suite. • [SLOW TEST:12.727 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2578,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:41:58.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0124 00:42:10.508185 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
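The interesting step in the garbage collector test above is the dual-ownership patch: half of the pods created by simpletest-rc-to-be-deleted get a second ownerReference pointing at simpletest-rc-to-stay, and the collector must then leave those pods alone when the first owner is deleted. A rough hand-run equivalent (the pod name placeholder is illustrative; the test performs this via the API):

$ RC_UID=$(kubectl get rc simpletest-rc-to-stay --namespace=gc-9800 -o jsonpath='{.metadata.uid}')
# append a second owner to a pod already owned by simpletest-rc-to-be-deleted
$ kubectl patch pod <pod-name> --namespace=gc-9800 --type=json \
    -p '[{"op":"add","path":"/metadata/ownerReferences/-","value":{"apiVersion":"v1","kind":"ReplicationController","name":"simpletest-rc-to-stay","uid":"'"$RC_UID"'"}}]'
$ kubectl delete rc simpletest-rc-to-be-deleted --namespace=gc-9800
# doubly-owned pods should survive; singly-owned ones get collected
$ kubectl get pods --namespace=gc-9800 -o jsonpath='{range .items[*]}{.metadata.name} -> {.metadata.ownerReferences[*].name}{"\n"}{end}'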
Jan 24 00:42:10.508: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:42:10.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9800" for this suite. • [SLOW TEST:12.362 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":168,"skipped":2582,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:42:11.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 24 00:42:16.370: INFO: Waiting up to 5m0s for pod "pod-6a794d46-556d-4e2d-ab04-60b293b37af6" in namespace "emptydir-7294" to be "success or failure" Jan 24 00:42:17.306: INFO: Pod "pod-6a794d46-556d-4e2d-ab04-60b293b37af6": Phase="Pending", Reason="", readiness=false. Elapsed: 935.309042ms Jan 24 00:42:19.324: INFO: Pod "pod-6a794d46-556d-4e2d-ab04-60b293b37af6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.953875624s Jan 24 00:42:22.708: INFO: Pod "pod-6a794d46-556d-4e2d-ab04-60b293b37af6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.337498394s Jan 24 00:42:25.204: INFO: Pod "pod-6a794d46-556d-4e2d-ab04-60b293b37af6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.833671831s Jan 24 00:42:27.878: INFO: Pod "pod-6a794d46-556d-4e2d-ab04-60b293b37af6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.50736661s Jan 24 00:42:29.883: INFO: Pod "pod-6a794d46-556d-4e2d-ab04-60b293b37af6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.513114742s Jan 24 00:42:31.904: INFO: Pod "pod-6a794d46-556d-4e2d-ab04-60b293b37af6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.534010539s Jan 24 00:42:33.917: INFO: Pod "pod-6a794d46-556d-4e2d-ab04-60b293b37af6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.546622982s Jan 24 00:42:35.926: INFO: Pod "pod-6a794d46-556d-4e2d-ab04-60b293b37af6": Phase="Pending", Reason="", readiness=false. Elapsed: 19.555666802s Jan 24 00:42:37.933: INFO: Pod "pod-6a794d46-556d-4e2d-ab04-60b293b37af6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.562248983s STEP: Saw pod success Jan 24 00:42:37.933: INFO: Pod "pod-6a794d46-556d-4e2d-ab04-60b293b37af6" satisfied condition "success or failure" Jan 24 00:42:37.937: INFO: Trying to get logs from node jerma-node pod pod-6a794d46-556d-4e2d-ab04-60b293b37af6 container test-container: STEP: delete the pod Jan 24 00:42:38.039: INFO: Waiting for pod pod-6a794d46-556d-4e2d-ab04-60b293b37af6 to disappear Jan 24 00:42:38.049: INFO: Pod pod-6a794d46-556d-4e2d-ab04-60b293b37af6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:42:38.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7294" for this suite. • [SLOW TEST:26.750 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2607,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:42:38.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2022 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a new StatefulSet Jan 24 00:42:38.257: INFO: Found 0 stateful pods, waiting for 3 Jan 24 00:42:48.320: INFO: Found 2 stateful pods, 
waiting for 3 Jan 24 00:42:58.266: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 24 00:42:58.266: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 24 00:42:58.266: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 24 00:43:08.268: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 24 00:43:08.268: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 24 00:43:08.268: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 24 00:43:08.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2022 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 24 00:43:08.736: INFO: stderr: "I0124 00:43:08.498967 3206 log.go:172] (0xc00096c2c0) (0xc0007abea0) Create stream\nI0124 00:43:08.499289 3206 log.go:172] (0xc00096c2c0) (0xc0007abea0) Stream added, broadcasting: 1\nI0124 00:43:08.505264 3206 log.go:172] (0xc00096c2c0) Reply frame received for 1\nI0124 00:43:08.505381 3206 log.go:172] (0xc00096c2c0) (0xc0008ae000) Create stream\nI0124 00:43:08.505392 3206 log.go:172] (0xc00096c2c0) (0xc0008ae000) Stream added, broadcasting: 3\nI0124 00:43:08.509852 3206 log.go:172] (0xc00096c2c0) Reply frame received for 3\nI0124 00:43:08.510031 3206 log.go:172] (0xc00096c2c0) (0xc0008ae320) Create stream\nI0124 00:43:08.510126 3206 log.go:172] (0xc00096c2c0) (0xc0008ae320) Stream added, broadcasting: 5\nI0124 00:43:08.514079 3206 log.go:172] (0xc00096c2c0) Reply frame received for 5\nI0124 00:43:08.616798 3206 log.go:172] (0xc00096c2c0) Data frame received for 5\nI0124 00:43:08.616876 3206 log.go:172] (0xc0008ae320) (5) Data frame handling\nI0124 00:43:08.616901 3206 log.go:172] (0xc0008ae320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0124 00:43:08.663045 3206 log.go:172] (0xc00096c2c0) Data frame received for 3\nI0124 00:43:08.663087 3206 log.go:172] (0xc0008ae000) (3) Data frame handling\nI0124 00:43:08.663097 3206 log.go:172] (0xc0008ae000) (3) Data frame sent\nI0124 00:43:08.730354 3206 log.go:172] (0xc00096c2c0) (0xc0008ae000) Stream removed, broadcasting: 3\nI0124 00:43:08.730464 3206 log.go:172] (0xc00096c2c0) Data frame received for 1\nI0124 00:43:08.730482 3206 log.go:172] (0xc0007abea0) (1) Data frame handling\nI0124 00:43:08.730495 3206 log.go:172] (0xc0007abea0) (1) Data frame sent\nI0124 00:43:08.730505 3206 log.go:172] (0xc00096c2c0) (0xc0007abea0) Stream removed, broadcasting: 1\nI0124 00:43:08.731251 3206 log.go:172] (0xc00096c2c0) (0xc0008ae320) Stream removed, broadcasting: 5\nI0124 00:43:08.731281 3206 log.go:172] (0xc00096c2c0) (0xc0007abea0) Stream removed, broadcasting: 1\nI0124 00:43:08.731295 3206 log.go:172] (0xc00096c2c0) (0xc0008ae000) Stream removed, broadcasting: 3\nI0124 00:43:08.731308 3206 log.go:172] (0xc00096c2c0) (0xc0008ae320) Stream removed, broadcasting: 5\n" Jan 24 00:43:08.736: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 24 00:43:08.736: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jan 24 00:43:18.783: INFO: Updating stateful set ss2 STEP: 
Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 24 00:43:28.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2022 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 24 00:43:29.289: INFO: stderr: "I0124 00:43:29.048774 3227 log.go:172] (0xc000115550) (0xc0008e00a0) Create stream\nI0124 00:43:29.049051 3227 log.go:172] (0xc000115550) (0xc0008e00a0) Stream added, broadcasting: 1\nI0124 00:43:29.054756 3227 log.go:172] (0xc000115550) Reply frame received for 1\nI0124 00:43:29.054832 3227 log.go:172] (0xc000115550) (0xc0008e0140) Create stream\nI0124 00:43:29.054845 3227 log.go:172] (0xc000115550) (0xc0008e0140) Stream added, broadcasting: 3\nI0124 00:43:29.057595 3227 log.go:172] (0xc000115550) Reply frame received for 3\nI0124 00:43:29.057628 3227 log.go:172] (0xc000115550) (0xc000a0a000) Create stream\nI0124 00:43:29.057646 3227 log.go:172] (0xc000115550) (0xc000a0a000) Stream added, broadcasting: 5\nI0124 00:43:29.058970 3227 log.go:172] (0xc000115550) Reply frame received for 5\nI0124 00:43:29.143416 3227 log.go:172] (0xc000115550) Data frame received for 5\nI0124 00:43:29.143463 3227 log.go:172] (0xc000a0a000) (5) Data frame handling\nI0124 00:43:29.143483 3227 log.go:172] (0xc000a0a000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0124 00:43:29.143524 3227 log.go:172] (0xc000115550) Data frame received for 3\nI0124 00:43:29.143568 3227 log.go:172] (0xc0008e0140) (3) Data frame handling\nI0124 00:43:29.143598 3227 log.go:172] (0xc0008e0140) (3) Data frame sent\nI0124 00:43:29.283743 3227 log.go:172] (0xc000115550) Data frame received for 1\nI0124 00:43:29.283784 3227 log.go:172] (0xc0008e00a0) (1) Data frame handling\nI0124 00:43:29.283796 3227 log.go:172] (0xc0008e00a0) (1) Data frame sent\nI0124 00:43:29.283815 3227 log.go:172] (0xc000115550) (0xc0008e00a0) Stream removed, broadcasting: 1\nI0124 00:43:29.284450 3227 log.go:172] (0xc000115550) (0xc0008e0140) Stream removed, broadcasting: 3\nI0124 00:43:29.284473 3227 log.go:172] (0xc000115550) (0xc000a0a000) Stream removed, broadcasting: 5\nI0124 00:43:29.284513 3227 log.go:172] (0xc000115550) (0xc0008e00a0) Stream removed, broadcasting: 1\nI0124 00:43:29.284525 3227 log.go:172] (0xc000115550) (0xc0008e0140) Stream removed, broadcasting: 3\nI0124 00:43:29.284533 3227 log.go:172] (0xc000115550) (0xc000a0a000) Stream removed, broadcasting: 5\n" Jan 24 00:43:29.289: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 24 00:43:29.289: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 24 00:43:29.402: INFO: Waiting for StatefulSet statefulset-2022/ss2 to complete update Jan 24 00:43:29.402: INFO: Waiting for Pod statefulset-2022/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 24 00:43:29.402: INFO: Waiting for Pod statefulset-2022/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 24 00:43:29.402: INFO: Waiting for Pod statefulset-2022/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 24 00:43:39.415: INFO: Waiting for StatefulSet statefulset-2022/ss2 to complete update Jan 24 00:43:39.415: INFO: Waiting for Pod statefulset-2022/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 24 00:43:39.415: INFO: Waiting for Pod statefulset-2022/ss2-1 to have revision ss2-84f9d6bf57 update revision 
ss2-65c7964b94 Jan 24 00:43:49.415: INFO: Waiting for StatefulSet statefulset-2022/ss2 to complete update Jan 24 00:43:49.415: INFO: Waiting for Pod statefulset-2022/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 24 00:43:49.415: INFO: Waiting for Pod statefulset-2022/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 24 00:43:59.419: INFO: Waiting for StatefulSet statefulset-2022/ss2 to complete update Jan 24 00:43:59.420: INFO: Waiting for Pod statefulset-2022/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 24 00:44:09.415: INFO: Waiting for StatefulSet statefulset-2022/ss2 to complete update Jan 24 00:44:09.416: INFO: Waiting for Pod statefulset-2022/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Jan 24 00:44:19.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2022 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 24 00:44:19.867: INFO: stderr: "I0124 00:44:19.641865 3248 log.go:172] (0xc000a9cf20) (0xc000a643c0) Create stream\nI0124 00:44:19.641978 3248 log.go:172] (0xc000a9cf20) (0xc000a643c0) Stream added, broadcasting: 1\nI0124 00:44:19.646380 3248 log.go:172] (0xc000a9cf20) Reply frame received for 1\nI0124 00:44:19.646432 3248 log.go:172] (0xc000a9cf20) (0xc000a82140) Create stream\nI0124 00:44:19.646447 3248 log.go:172] (0xc000a9cf20) (0xc000a82140) Stream added, broadcasting: 3\nI0124 00:44:19.649447 3248 log.go:172] (0xc000a9cf20) Reply frame received for 3\nI0124 00:44:19.649497 3248 log.go:172] (0xc000a9cf20) (0xc0009d2280) Create stream\nI0124 00:44:19.649509 3248 log.go:172] (0xc000a9cf20) (0xc0009d2280) Stream added, broadcasting: 5\nI0124 00:44:19.651667 3248 log.go:172] (0xc000a9cf20) Reply frame received for 5\nI0124 00:44:19.753582 3248 log.go:172] (0xc000a9cf20) Data frame received for 5\nI0124 00:44:19.753655 3248 log.go:172] (0xc0009d2280) (5) Data frame handling\nI0124 00:44:19.753679 3248 log.go:172] (0xc0009d2280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0124 00:44:19.790107 3248 log.go:172] (0xc000a9cf20) Data frame received for 3\nI0124 00:44:19.790130 3248 log.go:172] (0xc000a82140) (3) Data frame handling\nI0124 00:44:19.790144 3248 log.go:172] (0xc000a82140) (3) Data frame sent\nI0124 00:44:19.858276 3248 log.go:172] (0xc000a9cf20) Data frame received for 1\nI0124 00:44:19.858306 3248 log.go:172] (0xc000a643c0) (1) Data frame handling\nI0124 00:44:19.858336 3248 log.go:172] (0xc000a643c0) (1) Data frame sent\nI0124 00:44:19.858373 3248 log.go:172] (0xc000a9cf20) (0xc000a643c0) Stream removed, broadcasting: 1\nI0124 00:44:19.858464 3248 log.go:172] (0xc000a9cf20) (0xc000a82140) Stream removed, broadcasting: 3\nI0124 00:44:19.858503 3248 log.go:172] (0xc000a9cf20) (0xc0009d2280) Stream removed, broadcasting: 5\nI0124 00:44:19.858518 3248 log.go:172] (0xc000a9cf20) Go away received\nI0124 00:44:19.859250 3248 log.go:172] (0xc000a9cf20) (0xc000a643c0) Stream removed, broadcasting: 1\nI0124 00:44:19.859284 3248 log.go:172] (0xc000a9cf20) (0xc000a82140) Stream removed, broadcasting: 3\nI0124 00:44:19.859301 3248 log.go:172] (0xc000a9cf20) (0xc0009d2280) Stream removed, broadcasting: 5\n" Jan 24 00:44:19.867: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 24 00:44:19.867: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: 
'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 24 00:44:29.968: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 24 00:44:40.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2022 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 24 00:44:40.404: INFO: stderr: "I0124 00:44:40.229190 3269 log.go:172] (0xc00011e6e0) (0xc000739540) Create stream\nI0124 00:44:40.229475 3269 log.go:172] (0xc00011e6e0) (0xc000739540) Stream added, broadcasting: 1\nI0124 00:44:40.234484 3269 log.go:172] (0xc00011e6e0) Reply frame received for 1\nI0124 00:44:40.234640 3269 log.go:172] (0xc00011e6e0) (0xc00092c000) Create stream\nI0124 00:44:40.234686 3269 log.go:172] (0xc00011e6e0) (0xc00092c000) Stream added, broadcasting: 3\nI0124 00:44:40.236343 3269 log.go:172] (0xc00011e6e0) Reply frame received for 3\nI0124 00:44:40.236483 3269 log.go:172] (0xc00011e6e0) (0xc00092c0a0) Create stream\nI0124 00:44:40.236548 3269 log.go:172] (0xc00011e6e0) (0xc00092c0a0) Stream added, broadcasting: 5\nI0124 00:44:40.237758 3269 log.go:172] (0xc00011e6e0) Reply frame received for 5\nI0124 00:44:40.312158 3269 log.go:172] (0xc00011e6e0) Data frame received for 5\nI0124 00:44:40.312222 3269 log.go:172] (0xc00092c0a0) (5) Data frame handling\nI0124 00:44:40.312254 3269 log.go:172] (0xc00092c0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0124 00:44:40.312492 3269 log.go:172] (0xc00011e6e0) Data frame received for 3\nI0124 00:44:40.312509 3269 log.go:172] (0xc00092c000) (3) Data frame handling\nI0124 00:44:40.312528 3269 log.go:172] (0xc00092c000) (3) Data frame sent\nI0124 00:44:40.397571 3269 log.go:172] (0xc00011e6e0) (0xc00092c000) Stream removed, broadcasting: 3\nI0124 00:44:40.397696 3269 log.go:172] (0xc00011e6e0) Data frame received for 1\nI0124 00:44:40.397712 3269 log.go:172] (0xc000739540) (1) Data frame handling\nI0124 00:44:40.397723 3269 log.go:172] (0xc000739540) (1) Data frame sent\nI0124 00:44:40.397734 3269 log.go:172] (0xc00011e6e0) (0xc000739540) Stream removed, broadcasting: 1\nI0124 00:44:40.397864 3269 log.go:172] (0xc00011e6e0) (0xc00092c0a0) Stream removed, broadcasting: 5\nI0124 00:44:40.397989 3269 log.go:172] (0xc00011e6e0) Go away received\nI0124 00:44:40.398167 3269 log.go:172] (0xc00011e6e0) (0xc000739540) Stream removed, broadcasting: 1\nI0124 00:44:40.398181 3269 log.go:172] (0xc00011e6e0) (0xc00092c000) Stream removed, broadcasting: 3\nI0124 00:44:40.398187 3269 log.go:172] (0xc00011e6e0) (0xc00092c0a0) Stream removed, broadcasting: 5\n" Jan 24 00:44:40.404: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 24 00:44:40.404: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 24 00:44:50.434: INFO: Waiting for StatefulSet statefulset-2022/ss2 to complete update Jan 24 00:44:50.434: INFO: Waiting for Pod statefulset-2022/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 24 00:44:50.434: INFO: Waiting for Pod statefulset-2022/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 24 00:44:50.434: INFO: Waiting for Pod statefulset-2022/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 24 00:45:00.449: INFO: Waiting for StatefulSet statefulset-2022/ss2 to complete update Jan 24 00:45:00.450: INFO: Waiting for Pod statefulset-2022/ss2-0 to have 
revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 24 00:45:00.450: INFO: Waiting for Pod statefulset-2022/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 24 00:45:10.447: INFO: Waiting for StatefulSet statefulset-2022/ss2 to complete update Jan 24 00:45:10.447: INFO: Waiting for Pod statefulset-2022/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 24 00:45:10.447: INFO: Waiting for Pod statefulset-2022/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 24 00:45:20.446: INFO: Waiting for StatefulSet statefulset-2022/ss2 to complete update Jan 24 00:45:20.446: INFO: Waiting for Pod statefulset-2022/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 24 00:45:30.448: INFO: Waiting for StatefulSet statefulset-2022/ss2 to complete update Jan 24 00:45:30.448: INFO: Waiting for Pod statefulset-2022/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 24 00:45:40.447: INFO: Deleting all statefulset in ns statefulset-2022 Jan 24 00:45:40.452: INFO: Scaling statefulset ss2 to 0 Jan 24 00:46:10.504: INFO: Waiting for statefulset status.replicas updated to 0 Jan 24 00:46:10.512: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:46:10.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2022" for this suite. • [SLOW TEST:212.530 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":170,"skipped":2616,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:46:10.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-756 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 24 00:46:10.784: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 24 
00:46:47.102: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-756 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 24 00:46:47.102: INFO: >>> kubeConfig: /root/.kube/config I0124 00:46:47.155423 8 log.go:172] (0xc00293a4d0) (0xc001ed5c20) Create stream I0124 00:46:47.155507 8 log.go:172] (0xc00293a4d0) (0xc001ed5c20) Stream added, broadcasting: 1 I0124 00:46:47.158888 8 log.go:172] (0xc00293a4d0) Reply frame received for 1 I0124 00:46:47.158932 8 log.go:172] (0xc00293a4d0) (0xc001ed5cc0) Create stream I0124 00:46:47.158943 8 log.go:172] (0xc00293a4d0) (0xc001ed5cc0) Stream added, broadcasting: 3 I0124 00:46:47.160742 8 log.go:172] (0xc00293a4d0) Reply frame received for 3 I0124 00:46:47.160775 8 log.go:172] (0xc00293a4d0) (0xc001aefe00) Create stream I0124 00:46:47.160788 8 log.go:172] (0xc00293a4d0) (0xc001aefe00) Stream added, broadcasting: 5 I0124 00:46:47.162429 8 log.go:172] (0xc00293a4d0) Reply frame received for 5 I0124 00:46:48.235180 8 log.go:172] (0xc00293a4d0) Data frame received for 3 I0124 00:46:48.235238 8 log.go:172] (0xc001ed5cc0) (3) Data frame handling I0124 00:46:48.235259 8 log.go:172] (0xc001ed5cc0) (3) Data frame sent I0124 00:46:48.366156 8 log.go:172] (0xc00293a4d0) (0xc001ed5cc0) Stream removed, broadcasting: 3 I0124 00:46:48.366443 8 log.go:172] (0xc00293a4d0) Data frame received for 1 I0124 00:46:48.366470 8 log.go:172] (0xc001ed5c20) (1) Data frame handling I0124 00:46:48.366492 8 log.go:172] (0xc001ed5c20) (1) Data frame sent I0124 00:46:48.366517 8 log.go:172] (0xc00293a4d0) (0xc001ed5c20) Stream removed, broadcasting: 1 I0124 00:46:48.366997 8 log.go:172] (0xc00293a4d0) (0xc001aefe00) Stream removed, broadcasting: 5 I0124 00:46:48.367043 8 log.go:172] (0xc00293a4d0) (0xc001ed5c20) Stream removed, broadcasting: 1 I0124 00:46:48.367422 8 log.go:172] (0xc00293a4d0) (0xc001ed5cc0) Stream removed, broadcasting: 3 I0124 00:46:48.367511 8 log.go:172] (0xc00293a4d0) (0xc001aefe00) Stream removed, broadcasting: 5 I0124 00:46:48.368051 8 log.go:172] (0xc00293a4d0) Go away received Jan 24 00:46:48.368: INFO: Found all expected endpoints: [netserver-0] Jan 24 00:46:48.380: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-756 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 24 00:46:48.380: INFO: >>> kubeConfig: /root/.kube/config I0124 00:46:49.298661 8 log.go:172] (0xc002d24580) (0xc000cc45a0) Create stream I0124 00:46:49.298779 8 log.go:172] (0xc002d24580) (0xc000cc45a0) Stream added, broadcasting: 1 I0124 00:46:49.303230 8 log.go:172] (0xc002d24580) Reply frame received for 1 I0124 00:46:49.303276 8 log.go:172] (0xc002d24580) (0xc00204c1e0) Create stream I0124 00:46:49.303293 8 log.go:172] (0xc002d24580) (0xc00204c1e0) Stream added, broadcasting: 3 I0124 00:46:49.305508 8 log.go:172] (0xc002d24580) Reply frame received for 3 I0124 00:46:49.305544 8 log.go:172] (0xc002d24580) (0xc001e840a0) Create stream I0124 00:46:49.305566 8 log.go:172] (0xc002d24580) (0xc001e840a0) Stream added, broadcasting: 5 I0124 00:46:49.309459 8 log.go:172] (0xc002d24580) Reply frame received for 5 I0124 00:46:50.453190 8 log.go:172] (0xc002d24580) Data frame received for 3 I0124 00:46:50.453286 8 log.go:172] (0xc00204c1e0) (3) Data frame handling I0124 00:46:50.453324 8 
log.go:172] (0xc00204c1e0) (3) Data frame sent I0124 00:46:50.580454 8 log.go:172] (0xc002d24580) (0xc001e840a0) Stream removed, broadcasting: 5 I0124 00:46:50.580929 8 log.go:172] (0xc002d24580) (0xc00204c1e0) Stream removed, broadcasting: 3 I0124 00:46:50.581047 8 log.go:172] (0xc002d24580) Data frame received for 1 I0124 00:46:50.581171 8 log.go:172] (0xc000cc45a0) (1) Data frame handling I0124 00:46:50.581259 8 log.go:172] (0xc000cc45a0) (1) Data frame sent I0124 00:46:50.581584 8 log.go:172] (0xc002d24580) (0xc000cc45a0) Stream removed, broadcasting: 1 I0124 00:46:50.581626 8 log.go:172] (0xc002d24580) Go away received I0124 00:46:50.581737 8 log.go:172] (0xc002d24580) (0xc000cc45a0) Stream removed, broadcasting: 1 I0124 00:46:50.581790 8 log.go:172] (0xc002d24580) (0xc00204c1e0) Stream removed, broadcasting: 3 I0124 00:46:50.581858 8 log.go:172] (0xc002d24580) (0xc001e840a0) Stream removed, broadcasting: 5 Jan 24 00:46:50.581: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:46:50.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-756" for this suite. • [SLOW TEST:40.012 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2626,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:46:50.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-downwardapi-8smv STEP: Creating a pod to test atomic-volume-subpath Jan 24 00:46:50.757: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-8smv" in namespace "subpath-5558" to be "success or failure" Jan 24 00:46:50.836: INFO: Pod "pod-subpath-test-downwardapi-8smv": Phase="Pending", Reason="", readiness=false. Elapsed: 79.253084ms Jan 24 00:46:52.842: INFO: Pod "pod-subpath-test-downwardapi-8smv": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.084890302s Jan 24 00:46:54.863: INFO: Pod "pod-subpath-test-downwardapi-8smv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106376473s Jan 24 00:46:57.783: INFO: Pod "pod-subpath-test-downwardapi-8smv": Phase="Pending", Reason="", readiness=false. Elapsed: 7.026489894s Jan 24 00:46:59.861: INFO: Pod "pod-subpath-test-downwardapi-8smv": Phase="Pending", Reason="", readiness=false. Elapsed: 9.104487865s Jan 24 00:47:01.876: INFO: Pod "pod-subpath-test-downwardapi-8smv": Phase="Pending", Reason="", readiness=false. Elapsed: 11.119613397s Jan 24 00:47:03.881: INFO: Pod "pod-subpath-test-downwardapi-8smv": Phase="Running", Reason="", readiness=true. Elapsed: 13.12461588s Jan 24 00:47:05.887: INFO: Pod "pod-subpath-test-downwardapi-8smv": Phase="Running", Reason="", readiness=true. Elapsed: 15.130394813s Jan 24 00:47:07.894: INFO: Pod "pod-subpath-test-downwardapi-8smv": Phase="Running", Reason="", readiness=true. Elapsed: 17.137135228s Jan 24 00:47:09.900: INFO: Pod "pod-subpath-test-downwardapi-8smv": Phase="Running", Reason="", readiness=true. Elapsed: 19.142926915s Jan 24 00:47:11.906: INFO: Pod "pod-subpath-test-downwardapi-8smv": Phase="Running", Reason="", readiness=true. Elapsed: 21.148752145s Jan 24 00:47:13.925: INFO: Pod "pod-subpath-test-downwardapi-8smv": Phase="Running", Reason="", readiness=true. Elapsed: 23.168135904s Jan 24 00:47:15.931: INFO: Pod "pod-subpath-test-downwardapi-8smv": Phase="Running", Reason="", readiness=true. Elapsed: 25.174199688s Jan 24 00:47:17.937: INFO: Pod "pod-subpath-test-downwardapi-8smv": Phase="Running", Reason="", readiness=true. Elapsed: 27.179917306s Jan 24 00:47:19.943: INFO: Pod "pod-subpath-test-downwardapi-8smv": Phase="Running", Reason="", readiness=true. Elapsed: 29.186173091s Jan 24 00:47:21.947: INFO: Pod "pod-subpath-test-downwardapi-8smv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.190565336s STEP: Saw pod success Jan 24 00:47:21.947: INFO: Pod "pod-subpath-test-downwardapi-8smv" satisfied condition "success or failure" Jan 24 00:47:21.949: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-8smv container test-container-subpath-downwardapi-8smv: STEP: delete the pod Jan 24 00:47:22.084: INFO: Waiting for pod pod-subpath-test-downwardapi-8smv to disappear Jan 24 00:47:22.103: INFO: Pod pod-subpath-test-downwardapi-8smv no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-8smv Jan 24 00:47:22.103: INFO: Deleting pod "pod-subpath-test-downwardapi-8smv" in namespace "subpath-5558" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:47:22.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5558" for this suite. 
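The mechanism under test above is a subPath mount of an atomic-writer (downwardAPI) volume: the container mounts a single projected file rather than the whole volume directory. A minimal standalone sketch of that wiring (pod name, mount path, and command are illustrative, not the test's actual manifest, whose container keeps re-reading the file while Running, which is why the pod stays in phase Running for a while before Succeeded):

$ kubectl apply --namespace=subpath-5558 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /opt/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /opt/podname
      subPath: podname          # mount one file out of the volume, not the directory
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF

One caveat worth knowing: subPath mounts of downwardAPI and configMap volumes are pinned at pod start and do not see later atomic-writer updates.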
• [SLOW TEST:31.511 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":172,"skipped":2667,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:47:22.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1693 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 24 00:47:22.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-3103' Jan 24 00:47:22.499: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 24 00:47:22.500: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Jan 24 00:47:22.512: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jan 24 00:47:22.548: INFO: scanned /root for discovery docs: Jan 24 00:47:22.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3103' Jan 24 00:47:47.081: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 24 00:47:47.081: INFO: stdout: "Created e2e-test-httpd-rc-9ee1e464d23598124529c6daa14918cb\nScaling up e2e-test-httpd-rc-9ee1e464d23598124529c6daa14918cb from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-9ee1e464d23598124529c6daa14918cb up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-9ee1e464d23598124529c6daa14918cb to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Jan 24 00:47:47.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-3103' Jan 24 00:47:47.277: INFO: stderr: "" Jan 24 00:47:47.277: INFO: stdout: "e2e-test-httpd-rc-9ee1e464d23598124529c6daa14918cb-xdf82 " Jan 24 00:47:47.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-9ee1e464d23598124529c6daa14918cb-xdf82 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3103' Jan 24 00:47:47.358: INFO: stderr: "" Jan 24 00:47:47.358: INFO: stdout: "true" Jan 24 00:47:47.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-9ee1e464d23598124529c6daa14918cb-xdf82 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3103' Jan 24 00:47:47.445: INFO: stderr: "" Jan 24 00:47:47.445: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Jan 24 00:47:47.445: INFO: e2e-test-httpd-rc-9ee1e464d23598124529c6daa14918cb-xdf82 is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1699 Jan 24 00:47:47.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-3103' Jan 24 00:47:47.540: INFO: stderr: "" Jan 24 00:47:47.540: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:47:47.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3103" for this suite. 
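As the stderr captured above says, kubectl rolling-update was deprecated in favor of kubectl rollout against Deployments. A rough modern equivalent of this scenario (names are illustrative; unlike rolling-update, a Deployment does not roll new pods when the image is unchanged, so a version bump is used here to actually trigger a rollout):

$ kubectl create deployment e2e-test-httpd --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3103
$ kubectl set image deployment/e2e-test-httpd httpd=docker.io/library/httpd:2.4.39-alpine --namespace=kubectl-3103
$ kubectl rollout status deployment/e2e-test-httpd --namespace=kubectl-3103
# rolling back is a first-class operation instead of the rename dance seen above:
$ kubectl rollout undo deployment/e2e-test-httpd --namespace=kubectl-3103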
• [SLOW TEST:25.502 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1688 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":173,"skipped":2680,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:47:47.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Jan 24 00:47:47.839: INFO: PodSpec: initContainers in spec.initContainers Jan 24 00:48:42.979: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-21350453-cc45-49b6-816a-b57bc3c50051", GenerateName:"", Namespace:"init-container-3339", SelfLink:"/api/v1/namespaces/init-container-3339/pods/pod-init-21350453-cc45-49b6-816a-b57bc3c50051", UID:"d03f7555-13db-4fba-bc07-cd77f98c80fe", ResourceVersion:"3922197", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715423667, loc:(*time.Location)(0x7d7cf00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"839762190"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-7qz52", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0000b6f00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7qz52", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7qz52", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7qz52", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", 
TerminationGracePeriodSeconds:(*int64)(0xc003ce82c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0021686c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003ce8350)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003ce8370)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003ce8378), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003ce837c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423669, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423669, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423669, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423667, loc:(*time.Location)(0x7d7cf00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.1", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.1"}}, StartTime:(*v1.Time)(0xc0027e2760), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002c30b60)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002c30bd0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://812792505e696936a78e99f0843006c3a3273135201b263ee724d9f15fbdc03d", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0027e28c0), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0027e27c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc003ce83ff)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:48:42.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3339" for this suite. • [SLOW TEST:55.407 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":174,"skipped":2726,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:48:43.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 24 00:48:43.181: INFO: Waiting up to 5m0s for pod "pod-1c1afdc1-1a7c-467a-8760-b9c4953a6aab" in namespace "emptydir-5914" to be "success or failure" Jan 24 00:48:43.191: INFO: Pod "pod-1c1afdc1-1a7c-467a-8760-b9c4953a6aab": Phase="Pending", Reason="", readiness=false. Elapsed: 9.916744ms Jan 24 00:48:45.198: INFO: Pod "pod-1c1afdc1-1a7c-467a-8760-b9c4953a6aab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016360151s Jan 24 00:48:47.203: INFO: Pod "pod-1c1afdc1-1a7c-467a-8760-b9c4953a6aab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021507132s Jan 24 00:48:49.209: INFO: Pod "pod-1c1afdc1-1a7c-467a-8760-b9c4953a6aab": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.028047772s Jan 24 00:48:51.218: INFO: Pod "pod-1c1afdc1-1a7c-467a-8760-b9c4953a6aab": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036865599s Jan 24 00:48:53.224: INFO: Pod "pod-1c1afdc1-1a7c-467a-8760-b9c4953a6aab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.042456007s STEP: Saw pod success Jan 24 00:48:53.224: INFO: Pod "pod-1c1afdc1-1a7c-467a-8760-b9c4953a6aab" satisfied condition "success or failure" Jan 24 00:48:53.229: INFO: Trying to get logs from node jerma-node pod pod-1c1afdc1-1a7c-467a-8760-b9c4953a6aab container test-container: STEP: delete the pod Jan 24 00:48:53.440: INFO: Waiting for pod pod-1c1afdc1-1a7c-467a-8760-b9c4953a6aab to disappear Jan 24 00:48:53.444: INFO: Pod pod-1c1afdc1-1a7c-467a-8760-b9c4953a6aab no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:48:53.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5914" for this suite. • [SLOW TEST:10.434 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2738,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:48:53.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Jan 24 00:48:53.647: INFO: Waiting up to 5m0s for pod "downward-api-46059c0e-b72f-40f9-9205-98941574a248" in namespace "downward-api-6952" to be "success or failure" Jan 24 00:48:53.674: INFO: Pod "downward-api-46059c0e-b72f-40f9-9205-98941574a248": Phase="Pending", Reason="", readiness=false. Elapsed: 27.307878ms Jan 24 00:48:55.718: INFO: Pod "downward-api-46059c0e-b72f-40f9-9205-98941574a248": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070675388s Jan 24 00:48:57.729: INFO: Pod "downward-api-46059c0e-b72f-40f9-9205-98941574a248": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082076302s Jan 24 00:48:59.741: INFO: Pod "downward-api-46059c0e-b72f-40f9-9205-98941574a248": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094221566s Jan 24 00:49:01.748: INFO: Pod "downward-api-46059c0e-b72f-40f9-9205-98941574a248": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.10094094s STEP: Saw pod success Jan 24 00:49:01.748: INFO: Pod "downward-api-46059c0e-b72f-40f9-9205-98941574a248" satisfied condition "success or failure" Jan 24 00:49:01.752: INFO: Trying to get logs from node jerma-node pod downward-api-46059c0e-b72f-40f9-9205-98941574a248 container dapi-container: STEP: delete the pod Jan 24 00:49:01.786: INFO: Waiting for pod downward-api-46059c0e-b72f-40f9-9205-98941574a248 to disappear Jan 24 00:49:01.799: INFO: Pod downward-api-46059c0e-b72f-40f9-9205-98941574a248 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:49:01.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6952" for this suite. • [SLOW TEST:8.353 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2747,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:49:01.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 24 00:49:02.140: INFO: Waiting up to 5m0s for pod "downwardapi-volume-de95fbee-3f37-40d5-bd3d-bfc2ee6bcb42" in namespace "downward-api-4226" to be "success or failure" Jan 24 00:49:02.149: INFO: Pod "downwardapi-volume-de95fbee-3f37-40d5-bd3d-bfc2ee6bcb42": Phase="Pending", Reason="", readiness=false. Elapsed: 8.771167ms Jan 24 00:49:04.338: INFO: Pod "downwardapi-volume-de95fbee-3f37-40d5-bd3d-bfc2ee6bcb42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19780602s Jan 24 00:49:06.344: INFO: Pod "downwardapi-volume-de95fbee-3f37-40d5-bd3d-bfc2ee6bcb42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204036902s Jan 24 00:49:08.351: INFO: Pod "downwardapi-volume-de95fbee-3f37-40d5-bd3d-bfc2ee6bcb42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.211458469s Jan 24 00:49:10.388: INFO: Pod "downwardapi-volume-de95fbee-3f37-40d5-bd3d-bfc2ee6bcb42": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.24807364s STEP: Saw pod success Jan 24 00:49:10.388: INFO: Pod "downwardapi-volume-de95fbee-3f37-40d5-bd3d-bfc2ee6bcb42" satisfied condition "success or failure" Jan 24 00:49:10.391: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-de95fbee-3f37-40d5-bd3d-bfc2ee6bcb42 container client-container: STEP: delete the pod Jan 24 00:49:10.433: INFO: Waiting for pod downwardapi-volume-de95fbee-3f37-40d5-bd3d-bfc2ee6bcb42 to disappear Jan 24 00:49:10.438: INFO: Pod downwardapi-volume-de95fbee-3f37-40d5-bd3d-bfc2ee6bcb42 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:49:10.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4226" for this suite. • [SLOW TEST:8.642 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2748,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:49:10.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87 Jan 24 00:49:10.638: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 24 00:49:10.786: INFO: Waiting for terminating namespaces to be deleted... 
Jan 24 00:49:10.789: INFO: Logging pods the kubelet thinks are on node jerma-node before test Jan 24 00:49:10.797: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 24 00:49:10.797: INFO: Container weave ready: true, restart count 1 Jan 24 00:49:10.797: INFO: Container weave-npc ready: true, restart count 0 Jan 24 00:49:10.797: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded) Jan 24 00:49:10.797: INFO: Container kube-proxy ready: true, restart count 0 Jan 24 00:49:10.797: INFO: Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test Jan 24 00:49:10.818: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded) Jan 24 00:49:10.818: INFO: Container kube-proxy ready: true, restart count 0 Jan 24 00:49:10.818: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 24 00:49:10.818: INFO: Container weave ready: true, restart count 0 Jan 24 00:49:10.818: INFO: Container weave-npc ready: true, restart count 0 Jan 24 00:49:10.818: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded) Jan 24 00:49:10.818: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 24 00:49:10.818: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded) Jan 24 00:49:10.818: INFO: Container kube-scheduler ready: true, restart count 3 Jan 24 00:49:10.818: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded) Jan 24 00:49:10.818: INFO: Container etcd ready: true, restart count 1 Jan 24 00:49:10.818: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded) Jan 24 00:49:10.818: INFO: Container kube-apiserver ready: true, restart count 1 Jan 24 00:49:10.818: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded) Jan 24 00:49:10.818: INFO: Container coredns ready: true, restart count 0 Jan 24 00:49:10.818: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded) Jan 24 00:49:10.818: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-de3d8722-71f4-402d-93b0-4747c9fdde0a 42 STEP: Trying to relaunch the pod, now with labels.
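The pod relaunched in the STEP above pins itself to the freshly labeled node via spec.nodeSelector. A minimal Go sketch of such a pod, with the label key and value copied from the log (pod and container names, and the pause image choice, are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"}, // illustrative name
		Spec: corev1.PodSpec{
			// Schedulable only on a node carrying the exact label applied in the STEP above.
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-de3d8722-71f4-402d-93b0-4747c9fdde0a": "42",
			},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.1", // any small image works; pause is what this suite uses elsewhere
			}},
		},
	}
	fmt.Println(pod.Spec.NodeSelector)
}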
STEP: removing the label kubernetes.io/e2e-de3d8722-71f4-402d-93b0-4747c9fdde0a off the node jerma-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-de3d8722-71f4-402d-93b0-4747c9fdde0a [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:49:29.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4197" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78 • [SLOW TEST:18.678 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":178,"skipped":2769,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:49:29.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:331 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a replication controller Jan 24 00:49:29.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4092' Jan 24 00:49:29.849: INFO: stderr: "" Jan 24 00:49:29.849: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 24 00:49:29.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4092' Jan 24 00:49:30.148: INFO: stderr: "" Jan 24 00:49:30.149: INFO: stdout: "update-demo-nautilus-q5cb6 update-demo-nautilus-rmz8q " Jan 24 00:49:30.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q5cb6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4092' Jan 24 00:49:30.268: INFO: stderr: "" Jan 24 00:49:30.269: INFO: stdout: "" Jan 24 00:49:30.269: INFO: update-demo-nautilus-q5cb6 is created but not running Jan 24 00:49:35.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4092' Jan 24 00:49:35.758: INFO: stderr: "" Jan 24 00:49:35.758: INFO: stdout: "update-demo-nautilus-q5cb6 update-demo-nautilus-rmz8q " Jan 24 00:49:35.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q5cb6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4092' Jan 24 00:49:36.398: INFO: stderr: "" Jan 24 00:49:36.398: INFO: stdout: "" Jan 24 00:49:36.398: INFO: update-demo-nautilus-q5cb6 is created but not running Jan 24 00:49:41.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4092' Jan 24 00:49:41.533: INFO: stderr: "" Jan 24 00:49:41.533: INFO: stdout: "update-demo-nautilus-q5cb6 update-demo-nautilus-rmz8q " Jan 24 00:49:41.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q5cb6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4092' Jan 24 00:49:41.623: INFO: stderr: "" Jan 24 00:49:41.623: INFO: stdout: "true" Jan 24 00:49:41.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q5cb6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4092' Jan 24 00:49:41.697: INFO: stderr: "" Jan 24 00:49:41.697: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 24 00:49:41.697: INFO: validating pod update-demo-nautilus-q5cb6 Jan 24 00:49:41.706: INFO: got data: { "image": "nautilus.jpg" } Jan 24 00:49:41.706: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 24 00:49:41.706: INFO: update-demo-nautilus-q5cb6 is verified up and running Jan 24 00:49:41.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rmz8q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4092' Jan 24 00:49:41.814: INFO: stderr: "" Jan 24 00:49:41.814: INFO: stdout: "true" Jan 24 00:49:41.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rmz8q -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4092' Jan 24 00:49:41.895: INFO: stderr: "" Jan 24 00:49:41.895: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 24 00:49:41.895: INFO: validating pod update-demo-nautilus-rmz8q Jan 24 00:49:41.917: INFO: got data: { "image": "nautilus.jpg" } Jan 24 00:49:41.917: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 24 00:49:41.917: INFO: update-demo-nautilus-rmz8q is verified up and running STEP: scaling down the replication controller Jan 24 00:49:41.918: INFO: scanned /root for discovery docs: Jan 24 00:49:41.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4092' Jan 24 00:49:43.036: INFO: stderr: "" Jan 24 00:49:43.036: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 24 00:49:43.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4092' Jan 24 00:49:43.178: INFO: stderr: "" Jan 24 00:49:43.178: INFO: stdout: "update-demo-nautilus-q5cb6 update-demo-nautilus-rmz8q " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 24 00:49:48.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4092' Jan 24 00:49:48.353: INFO: stderr: "" Jan 24 00:49:48.354: INFO: stdout: "update-demo-nautilus-q5cb6 update-demo-nautilus-rmz8q " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 24 00:49:53.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4092' Jan 24 00:49:53.575: INFO: stderr: "" Jan 24 00:49:53.575: INFO: stdout: "update-demo-nautilus-rmz8q " Jan 24 00:49:53.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rmz8q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4092' Jan 24 00:49:53.838: INFO: stderr: "" Jan 24 00:49:53.838: INFO: stdout: "true" Jan 24 00:49:53.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rmz8q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4092' Jan 24 00:49:53.992: INFO: stderr: "" Jan 24 00:49:53.992: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 24 00:49:53.992: INFO: validating pod update-demo-nautilus-rmz8q Jan 24 00:49:53.996: INFO: got data: { "image": "nautilus.jpg" } Jan 24 00:49:53.996: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 24 00:49:53.996: INFO: update-demo-nautilus-rmz8q is verified up and running STEP: scaling up the replication controller Jan 24 00:49:54.000: INFO: scanned /root for discovery docs: Jan 24 00:49:54.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4092' Jan 24 00:49:55.118: INFO: stderr: "" Jan 24 00:49:55.118: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 24 00:49:55.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4092' Jan 24 00:49:55.315: INFO: stderr: "" Jan 24 00:49:55.315: INFO: stdout: "update-demo-nautilus-jx6k7 update-demo-nautilus-rmz8q " Jan 24 00:49:55.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jx6k7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4092' Jan 24 00:49:55.450: INFO: stderr: "" Jan 24 00:49:55.450: INFO: stdout: "" Jan 24 00:49:55.450: INFO: update-demo-nautilus-jx6k7 is created but not running Jan 24 00:50:00.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4092' Jan 24 00:50:00.636: INFO: stderr: "" Jan 24 00:50:00.636: INFO: stdout: "update-demo-nautilus-jx6k7 update-demo-nautilus-rmz8q " Jan 24 00:50:00.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jx6k7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4092' Jan 24 00:50:00.736: INFO: stderr: "" Jan 24 00:50:00.736: INFO: stdout: "true" Jan 24 00:50:00.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jx6k7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4092' Jan 24 00:50:00.857: INFO: stderr: "" Jan 24 00:50:00.857: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 24 00:50:00.857: INFO: validating pod update-demo-nautilus-jx6k7 Jan 24 00:50:00.869: INFO: got data: { "image": "nautilus.jpg" } Jan 24 00:50:00.869: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 24 00:50:00.869: INFO: update-demo-nautilus-jx6k7 is verified up and running Jan 24 00:50:00.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rmz8q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4092' Jan 24 00:50:01.021: INFO: stderr: "" Jan 24 00:50:01.021: INFO: stdout: "true" Jan 24 00:50:01.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rmz8q -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4092' Jan 24 00:50:01.109: INFO: stderr: "" Jan 24 00:50:01.109: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 24 00:50:01.109: INFO: validating pod update-demo-nautilus-rmz8q Jan 24 00:50:01.113: INFO: got data: { "image": "nautilus.jpg" } Jan 24 00:50:01.113: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 24 00:50:01.113: INFO: update-demo-nautilus-rmz8q is verified up and running STEP: using delete to clean up resources Jan 24 00:50:01.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4092' Jan 24 00:50:01.229: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 24 00:50:01.229: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 24 00:50:01.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4092' Jan 24 00:50:01.363: INFO: stderr: "No resources found in kubectl-4092 namespace.\n" Jan 24 00:50:01.363: INFO: stdout: "" Jan 24 00:50:01.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4092 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 24 00:50:01.440: INFO: stderr: "" Jan 24 00:50:01.440: INFO: stdout: "update-demo-nautilus-jx6k7\nupdate-demo-nautilus-rmz8q\n" Jan 24 00:50:01.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4092' Jan 24 00:50:02.422: INFO: stderr: "No resources found in kubectl-4092 namespace.\n" Jan 24 00:50:02.424: INFO: stdout: "" Jan 24 00:50:02.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4092 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 24 00:50:02.666: INFO: stderr: "" Jan 24 00:50:02.666: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:50:02.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4092" for this suite. 
• [SLOW TEST:33.544 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":179,"skipped":2777,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:50:02.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod busybox-9a39e467-1aaf-407c-bf81-b7b898d7bb97 in namespace container-probe-4693 Jan 24 00:50:12.984: INFO: Started pod busybox-9a39e467-1aaf-407c-bf81-b7b898d7bb97 in namespace container-probe-4693 STEP: checking the pod's current state and verifying that restartCount is present Jan 24 00:50:12.989: INFO: Initial restart count of pod busybox-9a39e467-1aaf-407c-bf81-b7b898d7bb97 is 0 Jan 24 00:51:01.466: INFO: Restart count of pod container-probe-4693/busybox-9a39e467-1aaf-407c-bf81-b7b898d7bb97 is now 1 (48.476021328s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:51:01.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4693" for this suite. 
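The restart counted above comes from an exec liveness probe (`cat /tmp/health`, per the test name) that starts failing once the container removes its own health file. A minimal sketch of a pod built that way; the shell command and probe timings are assumptions, not read from the test source:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "docker.io/library/busybox:1.29",
				// Create the health file, then remove it so later probes fail and the
				// kubelet restarts the container (assumed command, matching the observed behavior).
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // renamed ProbeHandler in newer API versions
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}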
• [SLOW TEST:58.894 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2800,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:51:01.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 24 00:51:10.912: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:51:10.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2215" for this suite. 
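What this test exercises: the container writes its termination message to a file and exits zero, and with TerminationMessagePolicy FallbackToLogsOnError the kubelet still reads the file (logs are only the fallback on failure). A minimal sketch; the exact write command is an assumption, though the expected message "OK" appears in the log above:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-from-file"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "term",
				Image: "docker.io/library/busybox:1.29",
				// Exit 0 after writing the message file; with FallbackToLogsOnError the
				// file still wins because the container did not fail.
				Command:                  []string{"/bin/sh", "-c", "printf OK > /dev/termination-log"},
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	fmt.Println(pod.Name)
}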
• [SLOW TEST:9.393 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2818,"failed":0} SSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:51:10.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test env composition Jan 24 00:51:11.123: INFO: Waiting up to 5m0s for pod "var-expansion-05f0e35a-e1fe-480b-aef1-ed071a73e19a" in namespace "var-expansion-3866" to be "success or failure" Jan 24 00:51:11.142: INFO: Pod "var-expansion-05f0e35a-e1fe-480b-aef1-ed071a73e19a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.350449ms Jan 24 00:51:13.148: INFO: Pod "var-expansion-05f0e35a-e1fe-480b-aef1-ed071a73e19a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025198708s Jan 24 00:51:15.599: INFO: Pod "var-expansion-05f0e35a-e1fe-480b-aef1-ed071a73e19a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.475535911s Jan 24 00:51:17.609: INFO: Pod "var-expansion-05f0e35a-e1fe-480b-aef1-ed071a73e19a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.485933311s Jan 24 00:51:19.615: INFO: Pod "var-expansion-05f0e35a-e1fe-480b-aef1-ed071a73e19a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.491969134s STEP: Saw pod success Jan 24 00:51:19.615: INFO: Pod "var-expansion-05f0e35a-e1fe-480b-aef1-ed071a73e19a" satisfied condition "success or failure" Jan 24 00:51:19.617: INFO: Trying to get logs from node jerma-node pod var-expansion-05f0e35a-e1fe-480b-aef1-ed071a73e19a container dapi-container: STEP: delete the pod Jan 24 00:51:19.674: INFO: Waiting for pod var-expansion-05f0e35a-e1fe-480b-aef1-ed071a73e19a to disappear Jan 24 00:51:19.682: INFO: Pod var-expansion-05f0e35a-e1fe-480b-aef1-ed071a73e19a no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:51:19.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3866" for this suite. 
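Env composition means one env var's value can reference earlier ones with $(NAME), expanded by Kubernetes before the container starts. A minimal sketch; the FOO/BAR/FOOBAR names and values are illustrative, not read from the test source:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"},
					{Name: "BAR", Value: "bar-value"},
					// $(FOO) and $(BAR) are expanded from the vars defined above, so the
					// container sees FOOBAR=foo-value;;bar-value.
					{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}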
• [SLOW TEST:8.777 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2821,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:51:19.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-2737 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 24 00:51:19.913: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 24 00:51:56.033: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-2737 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 24 00:51:56.033: INFO: >>> kubeConfig: /root/.kube/config I0124 00:51:56.071937 8 log.go:172] (0xc00293a6e0) (0xc001ed4500) Create stream I0124 00:51:56.071968 8 log.go:172] (0xc00293a6e0) (0xc001ed4500) Stream added, broadcasting: 1 I0124 00:51:56.074278 8 log.go:172] (0xc00293a6e0) Reply frame received for 1 I0124 00:51:56.074300 8 log.go:172] (0xc00293a6e0) (0xc001aa4d20) Create stream I0124 00:51:56.074307 8 log.go:172] (0xc00293a6e0) (0xc001aa4d20) Stream added, broadcasting: 3 I0124 00:51:56.075498 8 log.go:172] (0xc00293a6e0) Reply frame received for 3 I0124 00:51:56.075525 8 log.go:172] (0xc00293a6e0) (0xc001e840a0) Create stream I0124 00:51:56.075534 8 log.go:172] (0xc00293a6e0) (0xc001e840a0) Stream added, broadcasting: 5 I0124 00:51:56.076601 8 log.go:172] (0xc00293a6e0) Reply frame received for 5 I0124 00:51:56.145889 8 log.go:172] (0xc00293a6e0) Data frame received for 3 I0124 00:51:56.145939 8 log.go:172] (0xc001aa4d20) (3) Data frame handling I0124 00:51:56.145953 8 log.go:172] (0xc001aa4d20) (3) Data frame sent I0124 00:51:56.204767 8 log.go:172] (0xc00293a6e0) (0xc001aa4d20) Stream removed, broadcasting: 3 I0124 00:51:56.204855 8 log.go:172] (0xc00293a6e0) Data frame received for 1 I0124 00:51:56.204880 8 log.go:172] (0xc001ed4500) (1) Data frame handling I0124 00:51:56.204891 8 log.go:172] (0xc001ed4500) (1) Data frame sent I0124 00:51:56.204900 8 log.go:172] (0xc00293a6e0) (0xc001ed4500) Stream removed, broadcasting: 1 I0124 00:51:56.205070 8 log.go:172] (0xc00293a6e0) (0xc001e840a0) Stream removed, broadcasting: 5 I0124 00:51:56.205087 8 
log.go:172] (0xc00293a6e0) (0xc001ed4500) Stream removed, broadcasting: 1 I0124 00:51:56.205093 8 log.go:172] (0xc00293a6e0) (0xc001aa4d20) Stream removed, broadcasting: 3 I0124 00:51:56.205098 8 log.go:172] (0xc00293a6e0) (0xc001e840a0) Stream removed, broadcasting: 5 Jan 24 00:51:56.205: INFO: Waiting for responses: map[] I0124 00:51:56.205815 8 log.go:172] (0xc00293a6e0) Go away received Jan 24 00:51:56.209: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-2737 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 24 00:51:56.209: INFO: >>> kubeConfig: /root/.kube/config I0124 00:51:56.258942 8 log.go:172] (0xc00199a2c0) (0xc001aa5860) Create stream I0124 00:51:56.258983 8 log.go:172] (0xc00199a2c0) (0xc001aa5860) Stream added, broadcasting: 1 I0124 00:51:56.261589 8 log.go:172] (0xc00199a2c0) Reply frame received for 1 I0124 00:51:56.261638 8 log.go:172] (0xc00199a2c0) (0xc00204d4a0) Create stream I0124 00:51:56.261654 8 log.go:172] (0xc00199a2c0) (0xc00204d4a0) Stream added, broadcasting: 3 I0124 00:51:56.263624 8 log.go:172] (0xc00199a2c0) Reply frame received for 3 I0124 00:51:56.263651 8 log.go:172] (0xc00199a2c0) (0xc001e843c0) Create stream I0124 00:51:56.263658 8 log.go:172] (0xc00199a2c0) (0xc001e843c0) Stream added, broadcasting: 5 I0124 00:51:56.265648 8 log.go:172] (0xc00199a2c0) Reply frame received for 5 I0124 00:51:56.349153 8 log.go:172] (0xc00199a2c0) Data frame received for 3 I0124 00:51:56.349267 8 log.go:172] (0xc00204d4a0) (3) Data frame handling I0124 00:51:56.349319 8 log.go:172] (0xc00204d4a0) (3) Data frame sent I0124 00:51:56.410712 8 log.go:172] (0xc00199a2c0) Data frame received for 1 I0124 00:51:56.410844 8 log.go:172] (0xc00199a2c0) (0xc00204d4a0) Stream removed, broadcasting: 3 I0124 00:51:56.410882 8 log.go:172] (0xc001aa5860) (1) Data frame handling I0124 00:51:56.410898 8 log.go:172] (0xc001aa5860) (1) Data frame sent I0124 00:51:56.410908 8 log.go:172] (0xc00199a2c0) (0xc001aa5860) Stream removed, broadcasting: 1 I0124 00:51:56.411306 8 log.go:172] (0xc00199a2c0) (0xc001e843c0) Stream removed, broadcasting: 5 I0124 00:51:56.411363 8 log.go:172] (0xc00199a2c0) (0xc001aa5860) Stream removed, broadcasting: 1 I0124 00:51:56.411380 8 log.go:172] (0xc00199a2c0) (0xc00204d4a0) Stream removed, broadcasting: 3 I0124 00:51:56.411397 8 log.go:172] (0xc00199a2c0) (0xc001e843c0) Stream removed, broadcasting: 5 Jan 24 00:51:56.411: INFO: Waiting for responses: map[] I0124 00:51:56.411478 8 log.go:172] (0xc00199a2c0) Go away received [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:51:56.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2737" for this suite. 
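The curl in the ExecWithOptions lines above drives the test webserver's /dial endpoint: the test-container pod is asked to send a UDP "hostname" request to the target pod and report what answered. A small Go sketch of the same probe; the "responses" field name of the reply is assumed from the e2e netexec convention, not shown verbatim in this log:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Assumed shape of the /dial reply: one entry per try that got an answer.
type dialResponse struct {
	Responses []string `json:"responses"`
}

func main() {
	// URL copied from the ExecWithOptions line above: pod 10.44.0.2 dials the
	// target pod 10.44.0.1:8081 over UDP once and reports the hostname it got back.
	url := "http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.44.0.1&port=8081&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var dr dialResponse
	if err := json.NewDecoder(resp.Body).Decode(&dr); err != nil {
		panic(err)
	}
	fmt.Println(dr.Responses) // expect the target pod's hostname
}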
• [SLOW TEST:36.676 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":2833,"failed":0} SSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:51:56.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap configmap-1799/configmap-test-6f369a9a-986b-4753-be3c-b98573ecfb1f STEP: Creating a pod to test consume configMaps Jan 24 00:51:56.550: INFO: Waiting up to 5m0s for pod "pod-configmaps-c004264b-4a49-4a6a-9a61-d46271d273bf" in namespace "configmap-1799" to be "success or failure" Jan 24 00:51:56.629: INFO: Pod "pod-configmaps-c004264b-4a49-4a6a-9a61-d46271d273bf": Phase="Pending", Reason="", readiness=false. Elapsed: 79.096828ms Jan 24 00:51:58.645: INFO: Pod "pod-configmaps-c004264b-4a49-4a6a-9a61-d46271d273bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094972054s Jan 24 00:52:00.650: INFO: Pod "pod-configmaps-c004264b-4a49-4a6a-9a61-d46271d273bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100160511s Jan 24 00:52:02.656: INFO: Pod "pod-configmaps-c004264b-4a49-4a6a-9a61-d46271d273bf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106140592s Jan 24 00:52:04.764: INFO: Pod "pod-configmaps-c004264b-4a49-4a6a-9a61-d46271d273bf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.213553371s Jan 24 00:52:06.770: INFO: Pod "pod-configmaps-c004264b-4a49-4a6a-9a61-d46271d273bf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.220087738s Jan 24 00:52:08.777: INFO: Pod "pod-configmaps-c004264b-4a49-4a6a-9a61-d46271d273bf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.226862995s Jan 24 00:52:10.785: INFO: Pod "pod-configmaps-c004264b-4a49-4a6a-9a61-d46271d273bf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.234779824s STEP: Saw pod success Jan 24 00:52:10.785: INFO: Pod "pod-configmaps-c004264b-4a49-4a6a-9a61-d46271d273bf" satisfied condition "success or failure" Jan 24 00:52:10.790: INFO: Trying to get logs from node jerma-node pod pod-configmaps-c004264b-4a49-4a6a-9a61-d46271d273bf container env-test: STEP: delete the pod Jan 24 00:52:10.856: INFO: Waiting for pod pod-configmaps-c004264b-4a49-4a6a-9a61-d46271d273bf to disappear Jan 24 00:52:10.867: INFO: Pod pod-configmaps-c004264b-4a49-4a6a-9a61-d46271d273bf no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:52:10.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1799" for this suite. • [SLOW TEST:14.458 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":2838,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:52:10.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 24 00:52:11.709: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 24 00:52:13.727: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423931, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423931, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423931, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423931, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:52:15.732: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423931, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423931, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423931, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423931, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 00:52:17.768: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423931, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423931, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423931, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715423931, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 24 00:52:20.812: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API Jan 24 00:52:20.935: INFO: Waiting for webhook configuration to be ready... STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:52:21.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-353" for this suite. STEP: Destroying namespace "webhook-353-markers" for this suite. 
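------------------------------
For reference, the fail-closed behavior verified above comes from FailurePolicy: Fail on a webhook whose backing service the apiserver can never reach, so every matching request is rejected instead of being let through. A minimal sketch of such a configuration using the admissionregistration/v1 Go types (object, namespace, and service names here are hypothetical, not the test's generated ones):

package main

import (
	"fmt"

	admv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fail := admv1.Fail // reject the request if the webhook cannot be called
	none := admv1.SideEffectClassNone

	cfg := admv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "fail-closed-example"},
		Webhooks: []admv1.ValidatingWebhook{{
			Name: "fail-closed.example.com",
			ClientConfig: admv1.WebhookClientConfig{
				// Deliberately points at a service that does not exist.
				Service: &admv1.ServiceReference{Namespace: "webhook-ns", Name: "no-such-service"},
			},
			Rules: []admv1.RuleWithOperations{{
				Operations: []admv1.OperationType{admv1.Create},
				Rule: admv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			FailurePolicy:           &fail,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	fmt.Println("would reject configmap creates:", cfg.Name)
}
------------------------------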
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:10.390 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":185,"skipped":2853,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:52:21.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:52:32.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7528" for this suite. • [SLOW TEST:11.272 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":278,"completed":186,"skipped":2856,"failed":0} SS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:52:32.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Jan 24 00:52:32.661: INFO: Waiting up to 5m0s for pod "downward-api-e605f251-1545-43e7-9c9a-1de1d360b956" in namespace "downward-api-6393" to be "success or failure" Jan 24 00:52:32.673: INFO: Pod "downward-api-e605f251-1545-43e7-9c9a-1de1d360b956": Phase="Pending", Reason="", readiness=false. Elapsed: 12.538383ms Jan 24 00:52:34.680: INFO: Pod "downward-api-e605f251-1545-43e7-9c9a-1de1d360b956": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019722305s Jan 24 00:52:36.689: INFO: Pod "downward-api-e605f251-1545-43e7-9c9a-1de1d360b956": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028220595s Jan 24 00:52:38.699: INFO: Pod "downward-api-e605f251-1545-43e7-9c9a-1de1d360b956": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037955609s Jan 24 00:52:40.704: INFO: Pod "downward-api-e605f251-1545-43e7-9c9a-1de1d360b956": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043368382s Jan 24 00:52:42.710: INFO: Pod "downward-api-e605f251-1545-43e7-9c9a-1de1d360b956": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.049285977s STEP: Saw pod success Jan 24 00:52:42.710: INFO: Pod "downward-api-e605f251-1545-43e7-9c9a-1de1d360b956" satisfied condition "success or failure" Jan 24 00:52:42.713: INFO: Trying to get logs from node jerma-node pod downward-api-e605f251-1545-43e7-9c9a-1de1d360b956 container dapi-container: STEP: delete the pod Jan 24 00:52:42.766: INFO: Waiting for pod downward-api-e605f251-1545-43e7-9c9a-1de1d360b956 to disappear Jan 24 00:52:42.828: INFO: Pod downward-api-e605f251-1545-43e7-9c9a-1de1d360b956 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:52:42.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6393" for this suite. 
• [SLOW TEST:10.297 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":2858,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:52:42.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jan 24 00:52:43.172: INFO: Number of nodes with available pods: 0 Jan 24 00:52:43.172: INFO: Node jerma-node is running more than one daemon pod Jan 24 00:52:44.183: INFO: Number of nodes with available pods: 0 Jan 24 00:52:44.183: INFO: Node jerma-node is running more than one daemon pod Jan 24 00:52:45.594: INFO: Number of nodes with available pods: 0 Jan 24 00:52:45.594: INFO: Node jerma-node is running more than one daemon pod Jan 24 00:52:46.188: INFO: Number of nodes with available pods: 0 Jan 24 00:52:46.188: INFO: Node jerma-node is running more than one daemon pod Jan 24 00:52:47.183: INFO: Number of nodes with available pods: 0 Jan 24 00:52:47.183: INFO: Node jerma-node is running more than one daemon pod Jan 24 00:52:50.216: INFO: Number of nodes with available pods: 0 Jan 24 00:52:50.216: INFO: Node jerma-node is running more than one daemon pod Jan 24 00:52:51.201: INFO: Number of nodes with available pods: 0 Jan 24 00:52:51.201: INFO: Node jerma-node is running more than one daemon pod Jan 24 00:52:52.190: INFO: Number of nodes with available pods: 2 Jan 24 00:52:52.190: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Jan 24 00:52:52.340: INFO: Number of nodes with available pods: 1 Jan 24 00:52:52.340: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:52:53.358: INFO: Number of nodes with available pods: 1 Jan 24 00:52:53.359: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:52:54.349: INFO: Number of nodes with available pods: 1 Jan 24 00:52:54.349: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:52:55.352: INFO: Number of nodes with available pods: 1 Jan 24 00:52:55.352: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:52:56.422: INFO: Number of nodes with available pods: 1 Jan 24 00:52:56.422: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:52:57.355: INFO: Number of nodes with available pods: 1 Jan 24 00:52:57.355: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:52:58.354: INFO: Number of nodes with available pods: 1 Jan 24 00:52:58.354: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:52:59.360: INFO: Number of nodes with available pods: 1 Jan 24 00:52:59.360: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:53:00.352: INFO: Number of nodes with available pods: 1 Jan 24 00:53:00.352: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:53:01.360: INFO: Number of nodes with available pods: 1 Jan 24 00:53:01.360: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:53:02.352: INFO: Number of nodes with available pods: 1 Jan 24 00:53:02.352: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:53:03.348: INFO: Number of nodes with available pods: 1 Jan 24 00:53:03.348: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:53:04.348: INFO: Number of nodes with available pods: 1 Jan 24 00:53:04.348: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:53:05.350: INFO: Number of nodes with available pods: 1 Jan 24 00:53:05.350: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:53:07.070: INFO: Number of nodes with available pods: 1 Jan 24 00:53:07.071: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:53:07.405: INFO: Number of nodes with available pods: 1 Jan 24 00:53:07.405: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:53:08.359: INFO: Number of nodes with available pods: 1 Jan 24 00:53:08.360: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 24 00:53:09.353: INFO: Number of nodes with available pods: 2 Jan 24 00:53:09.353: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9236, will wait for the garbage collector to delete the pods Jan 24 00:53:09.484: INFO: Deleting DaemonSet.extensions daemon-set took: 66.568134ms Jan 24 00:53:09.885: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.706426ms Jan 24 00:53:23.190: INFO: Number of nodes with available pods: 0 Jan 24 00:53:23.190: INFO: Number of running nodes: 0, number of available pods: 0 Jan 24 00:53:23.193: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9236/daemonsets","resourceVersion":"3923382"},"items":null} Jan 24 00:53:23.195: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9236/pods","resourceVersion":"3923382"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:53:23.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9236" for this suite. • [SLOW TEST:40.374 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":188,"skipped":2887,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:53:23.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 24 00:53:23.381: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2d334735-fc6d-4d06-b1e2-499e9b8bd85a" in namespace "downward-api-2418" to be "success or failure" Jan 24 00:53:23.417: INFO: Pod "downwardapi-volume-2d334735-fc6d-4d06-b1e2-499e9b8bd85a": Phase="Pending", Reason="", readiness=false. Elapsed: 36.326239ms Jan 24 00:53:25.426: INFO: Pod "downwardapi-volume-2d334735-fc6d-4d06-b1e2-499e9b8bd85a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045552658s Jan 24 00:53:27.433: INFO: Pod "downwardapi-volume-2d334735-fc6d-4d06-b1e2-499e9b8bd85a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052051311s Jan 24 00:53:29.442: INFO: Pod "downwardapi-volume-2d334735-fc6d-4d06-b1e2-499e9b8bd85a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061108073s Jan 24 00:53:31.453: INFO: Pod "downwardapi-volume-2d334735-fc6d-4d06-b1e2-499e9b8bd85a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.072271249s STEP: Saw pod success Jan 24 00:53:31.453: INFO: Pod "downwardapi-volume-2d334735-fc6d-4d06-b1e2-499e9b8bd85a" satisfied condition "success or failure" Jan 24 00:53:31.460: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-2d334735-fc6d-4d06-b1e2-499e9b8bd85a container client-container: STEP: delete the pod Jan 24 00:53:31.739: INFO: Waiting for pod downwardapi-volume-2d334735-fc6d-4d06-b1e2-499e9b8bd85a to disappear Jan 24 00:53:31.807: INFO: Pod downwardapi-volume-2d334735-fc6d-4d06-b1e2-499e9b8bd85a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:53:31.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2418" for this suite. • [SLOW TEST:8.619 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":2935,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:53:31.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 24 00:53:31.996: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jan 24 00:53:35.743: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:53:36.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3485" for this suite. 
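------------------------------
For reference, the failure condition surfaced above is the controller's ReplicaFailure condition, set on the RC status when quota admission denies pod creation, and cleared once the RC is scaled back within the quota. A minimal sketch of the two objects involved (values are illustrative; the logged test also names both "condition-test"):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(3)

	// Quota that admits at most two pods in the namespace.
	quota := corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
		},
	}

	// RC that asks for one replica more than the quota allows; the controller
	// then reports a ReplicaFailure condition on the RC status.
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: map[string]string{"name": "condition-test"},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "condition-test"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:    "app",
						Image:   "busybox",
						Command: []string{"sh", "-c", "sleep 3600"},
					}},
				},
			},
		},
	}
	fmt.Println(quota.Name, rc.Name)
}
------------------------------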
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":190,"skipped":2941,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:53:36.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Jan 24 00:53:37.159: INFO: >>> kubeConfig: /root/.kube/config Jan 24 00:53:40.113: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:53:52.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4716" for this suite. • [SLOW TEST:16.560 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":191,"skipped":2948,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:53:52.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 24 00:53:52.805: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33dbe64a-2ad8-4961-a91c-01ebc5b2d557" in namespace "downward-api-4818" 
to be "success or failure" Jan 24 00:53:52.811: INFO: Pod "downwardapi-volume-33dbe64a-2ad8-4961-a91c-01ebc5b2d557": Phase="Pending", Reason="", readiness=false. Elapsed: 5.376879ms Jan 24 00:53:54.816: INFO: Pod "downwardapi-volume-33dbe64a-2ad8-4961-a91c-01ebc5b2d557": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010249907s Jan 24 00:53:56.822: INFO: Pod "downwardapi-volume-33dbe64a-2ad8-4961-a91c-01ebc5b2d557": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016438388s Jan 24 00:53:58.830: INFO: Pod "downwardapi-volume-33dbe64a-2ad8-4961-a91c-01ebc5b2d557": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024399604s Jan 24 00:54:00.836: INFO: Pod "downwardapi-volume-33dbe64a-2ad8-4961-a91c-01ebc5b2d557": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.030403507s STEP: Saw pod success Jan 24 00:54:00.836: INFO: Pod "downwardapi-volume-33dbe64a-2ad8-4961-a91c-01ebc5b2d557" satisfied condition "success or failure" Jan 24 00:54:00.840: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-33dbe64a-2ad8-4961-a91c-01ebc5b2d557 container client-container: STEP: delete the pod Jan 24 00:54:00.883: INFO: Waiting for pod downwardapi-volume-33dbe64a-2ad8-4961-a91c-01ebc5b2d557 to disappear Jan 24 00:54:00.894: INFO: Pod downwardapi-volume-33dbe64a-2ad8-4961-a91c-01ebc5b2d557 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:54:00.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4818" for this suite. • [SLOW TEST:8.343 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":2966,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:54:00.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-19090c65-b1bd-4deb-b44f-92143febbac6 STEP: Creating a pod to test consume secrets Jan 24 00:54:01.189: INFO: Waiting up to 5m0s for pod "pod-secrets-03e77e4a-2981-45ae-809f-cc51642ebb37" in namespace "secrets-3634" to be "success or failure" Jan 24 00:54:01.196: INFO: Pod "pod-secrets-03e77e4a-2981-45ae-809f-cc51642ebb37": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.752516ms Jan 24 00:54:03.205: INFO: Pod "pod-secrets-03e77e4a-2981-45ae-809f-cc51642ebb37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015600102s Jan 24 00:54:05.209: INFO: Pod "pod-secrets-03e77e4a-2981-45ae-809f-cc51642ebb37": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019760552s Jan 24 00:54:07.216: INFO: Pod "pod-secrets-03e77e4a-2981-45ae-809f-cc51642ebb37": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026293981s Jan 24 00:54:09.221: INFO: Pod "pod-secrets-03e77e4a-2981-45ae-809f-cc51642ebb37": Phase="Pending", Reason="", readiness=false. Elapsed: 8.031316023s Jan 24 00:54:11.226: INFO: Pod "pod-secrets-03e77e4a-2981-45ae-809f-cc51642ebb37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.036989061s STEP: Saw pod success Jan 24 00:54:11.226: INFO: Pod "pod-secrets-03e77e4a-2981-45ae-809f-cc51642ebb37" satisfied condition "success or failure" Jan 24 00:54:11.232: INFO: Trying to get logs from node jerma-node pod pod-secrets-03e77e4a-2981-45ae-809f-cc51642ebb37 container secret-volume-test: STEP: delete the pod Jan 24 00:54:11.347: INFO: Waiting for pod pod-secrets-03e77e4a-2981-45ae-809f-cc51642ebb37 to disappear Jan 24 00:54:11.358: INFO: Pod pod-secrets-03e77e4a-2981-45ae-809f-cc51642ebb37 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:54:11.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3634" for this suite. • [SLOW TEST:10.438 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":2994,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:54:11.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name s-test-opt-del-6ab2e82e-cd13-48cb-93fb-0d6aa906a36f STEP: Creating secret with name s-test-opt-upd-6bf62d33-ee1e-412c-8590-f115aad68577 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-6ab2e82e-cd13-48cb-93fb-0d6aa906a36f STEP: Updating secret s-test-opt-upd-6bf62d33-ee1e-412c-8590-f115aad68577 STEP: Creating secret with name s-test-opt-create-ecdd63b2-486f-44ad-8e68-d70cb0699cb9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:55:40.757: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "projected-1004" for this suite. • [SLOW TEST:89.435 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":3003,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:55:40.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:55:40.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2697" for this suite. 
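------------------------------
For reference, the discovery walk above (fetch /apis, find the apiextensions.k8s.io group, then its customresourcedefinitions resource) can be reproduced with client-go's discovery client. A minimal sketch (kubeconfig path copied from the log; error handling reduced to panics):

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Walk the /apis discovery document, as the test does, and look for
	// the group that serves CustomResourceDefinitions.
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name == "apiextensions.k8s.io" {
			fmt.Println("found group, preferred version:", g.PreferredVersion.GroupVersion)
		}
	}
}
------------------------------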
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":195,"skipped":3028,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:55:40.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 24 00:55:41.113: INFO: Waiting up to 5m0s for pod "pod-5c71c7bc-9fc0-4f1f-aed8-0d06c3946eed" in namespace "emptydir-7581" to be "success or failure" Jan 24 00:55:41.135: INFO: Pod "pod-5c71c7bc-9fc0-4f1f-aed8-0d06c3946eed": Phase="Pending", Reason="", readiness=false. Elapsed: 21.955405ms Jan 24 00:55:43.160: INFO: Pod "pod-5c71c7bc-9fc0-4f1f-aed8-0d06c3946eed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04673961s Jan 24 00:55:45.168: INFO: Pod "pod-5c71c7bc-9fc0-4f1f-aed8-0d06c3946eed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054279028s Jan 24 00:55:47.173: INFO: Pod "pod-5c71c7bc-9fc0-4f1f-aed8-0d06c3946eed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059934245s Jan 24 00:55:49.307: INFO: Pod "pod-5c71c7bc-9fc0-4f1f-aed8-0d06c3946eed": Phase="Pending", Reason="", readiness=false. Elapsed: 8.193637436s Jan 24 00:55:51.321: INFO: Pod "pod-5c71c7bc-9fc0-4f1f-aed8-0d06c3946eed": Phase="Pending", Reason="", readiness=false. Elapsed: 10.207534869s Jan 24 00:55:53.334: INFO: Pod "pod-5c71c7bc-9fc0-4f1f-aed8-0d06c3946eed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.220598219s STEP: Saw pod success Jan 24 00:55:53.334: INFO: Pod "pod-5c71c7bc-9fc0-4f1f-aed8-0d06c3946eed" satisfied condition "success or failure" Jan 24 00:55:53.340: INFO: Trying to get logs from node jerma-node pod pod-5c71c7bc-9fc0-4f1f-aed8-0d06c3946eed container test-container: STEP: delete the pod Jan 24 00:55:53.557: INFO: Waiting for pod pod-5c71c7bc-9fc0-4f1f-aed8-0d06c3946eed to disappear Jan 24 00:55:53.571: INFO: Pod pod-5c71c7bc-9fc0-4f1f-aed8-0d06c3946eed no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:55:53.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7581" for this suite. 
• [SLOW TEST:12.675 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3058,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:55:53.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 24 00:55:53.965: INFO: Waiting up to 5m0s for pod "pod-cbd9d32d-0627-49c3-bef2-99b8dd703a26" in namespace "emptydir-2300" to be "success or failure" Jan 24 00:55:54.013: INFO: Pod "pod-cbd9d32d-0627-49c3-bef2-99b8dd703a26": Phase="Pending", Reason="", readiness=false. Elapsed: 47.469273ms Jan 24 00:55:56.017: INFO: Pod "pod-cbd9d32d-0627-49c3-bef2-99b8dd703a26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05155166s Jan 24 00:55:58.024: INFO: Pod "pod-cbd9d32d-0627-49c3-bef2-99b8dd703a26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058371569s Jan 24 00:56:00.029: INFO: Pod "pod-cbd9d32d-0627-49c3-bef2-99b8dd703a26": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06390521s Jan 24 00:56:02.036: INFO: Pod "pod-cbd9d32d-0627-49c3-bef2-99b8dd703a26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071323081s STEP: Saw pod success Jan 24 00:56:02.037: INFO: Pod "pod-cbd9d32d-0627-49c3-bef2-99b8dd703a26" satisfied condition "success or failure" Jan 24 00:56:02.040: INFO: Trying to get logs from node jerma-node pod pod-cbd9d32d-0627-49c3-bef2-99b8dd703a26 container test-container: STEP: delete the pod Jan 24 00:56:02.392: INFO: Waiting for pod pod-cbd9d32d-0627-49c3-bef2-99b8dd703a26 to disappear Jan 24 00:56:02.416: INFO: Pod pod-cbd9d32d-0627-49c3-bef2-99b8dd703a26 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:56:02.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2300" for this suite. 
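------------------------------
The (non-root,0644,tmpfs) variant above differs from the previous test only in running the container as a non-root UID and expecting mode 0644; sketched here via a pod-level security context (UID 1001 is an arbitrary stand-in, not the test's value):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // any non-root UID
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-nonroot-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Run every container in the pod as the non-root UID; tmpfs
			// emptyDirs are world-writable by default, so the write succeeds.
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0644 /test-volume/f && ls -ln /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
------------------------------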
• [SLOW TEST:8.844 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3069,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:56:02.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test substitution in container's args Jan 24 00:56:02.657: INFO: Waiting up to 5m0s for pod "var-expansion-fc5f42f2-082f-40a3-8c48-671351fb659e" in namespace "var-expansion-2392" to be "success or failure" Jan 24 00:56:02.721: INFO: Pod "var-expansion-fc5f42f2-082f-40a3-8c48-671351fb659e": Phase="Pending", Reason="", readiness=false. Elapsed: 63.746794ms Jan 24 00:56:04.727: INFO: Pod "var-expansion-fc5f42f2-082f-40a3-8c48-671351fb659e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070406377s Jan 24 00:56:06.731: INFO: Pod "var-expansion-fc5f42f2-082f-40a3-8c48-671351fb659e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073795571s Jan 24 00:56:08.734: INFO: Pod "var-expansion-fc5f42f2-082f-40a3-8c48-671351fb659e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077551807s Jan 24 00:56:10.742: INFO: Pod "var-expansion-fc5f42f2-082f-40a3-8c48-671351fb659e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08504289s STEP: Saw pod success Jan 24 00:56:10.742: INFO: Pod "var-expansion-fc5f42f2-082f-40a3-8c48-671351fb659e" satisfied condition "success or failure" Jan 24 00:56:10.746: INFO: Trying to get logs from node jerma-node pod var-expansion-fc5f42f2-082f-40a3-8c48-671351fb659e container dapi-container: STEP: delete the pod Jan 24 00:56:10.791: INFO: Waiting for pod var-expansion-fc5f42f2-082f-40a3-8c48-671351fb659e to disappear Jan 24 00:56:10.829: INFO: Pod var-expansion-fc5f42f2-082f-40a3-8c48-671351fb659e no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:56:10.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2392" for this suite. 
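------------------------------
For reference, the substitution verified above is kubelet-side $(VAR) expansion in a container's command and args: the reference is replaced from the container's declared env vars before the process starts, with no shell involved. A minimal sketch (names and image are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Env:     []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
				Command: []string{"sh", "-c"},
				// $(TEST_VAR) is substituted by the kubelet, not by the shell.
				Args: []string{"echo $(TEST_VAR)"},
			}},
		},
	}
	fmt.Println(pod.Name)
}
------------------------------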
• [SLOW TEST:8.362 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3081,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:56:10.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1592.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1592.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 24 00:56:22.999: INFO: DNS probes using dns-1592/dns-test-9a3714fd-2455-462e-80bd-3632eb5ea1de succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:56:23.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1592" for this suite. 
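------------------------------
For reference, the dig loops above check the same resolution an ordinary pod gets through its kubelet-managed resolv.conf; the short "kubernetes.default" form relies on the cluster search path. A minimal in-pod sketch using only the standard library (it only resolves when run inside a cluster pod):

package main

import (
	"fmt"
	"net"
)

func main() {
	names := []string{
		"kubernetes.default.svc.cluster.local",
		"kubernetes.default", // resolved via the search path in resolv.conf
	}
	for _, n := range names {
		ips, err := net.LookupIP(n)
		if err != nil {
			fmt.Printf("%s: lookup failed: %v\n", n, err)
			continue
		}
		fmt.Printf("%s -> %v\n", n, ips)
	}
}
------------------------------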
• [SLOW TEST:12.369 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":199,"skipped":3097,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:56:23.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87 Jan 24 00:56:23.265: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 24 00:56:23.283: INFO: Waiting for terminating namespaces to be deleted... Jan 24 00:56:23.285: INFO: Logging pods the kubelet thinks is on node jerma-node before test Jan 24 00:56:23.292: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Jan 24 00:56:23.292: INFO: Container kube-proxy ready: true, restart count 0 Jan 24 00:56:23.292: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 24 00:56:23.292: INFO: Container weave ready: true, restart count 1 Jan 24 00:56:23.292: INFO: Container weave-npc ready: true, restart count 0 Jan 24 00:56:23.292: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Jan 24 00:56:23.313: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 24 00:56:23.313: INFO: Container kube-apiserver ready: true, restart count 1 Jan 24 00:56:23.313: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 24 00:56:23.313: INFO: Container etcd ready: true, restart count 1 Jan 24 00:56:23.313: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 24 00:56:23.313: INFO: Container coredns ready: true, restart count 0 Jan 24 00:56:23.313: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 24 00:56:23.313: INFO: Container coredns ready: true, restart count 0 Jan 24 00:56:23.313: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 24 00:56:23.313: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 24 00:56:23.313: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Jan 24 00:56:23.313: INFO: Container kube-proxy ready: true, restart count 0 Jan 24 00:56:23.313: INFO: weave-net-z6tjf from kube-system started 
at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 24 00:56:23.313: INFO: Container weave ready: true, restart count 0 Jan 24 00:56:23.314: INFO: Container weave-npc ready: true, restart count 0 Jan 24 00:56:23.314: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 24 00:56:23.314: INFO: Container kube-scheduler ready: true, restart count 3 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15ecaca2891e2bea], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.15ecaca28ba3df55], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:56:24.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1655" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":200,"skipped":3122,"failed":0} ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:56:24.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Jan 24 00:56:24.716: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:56:39.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8274" for this suite. 
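------------------------------
For reference, init containers run sequentially and must each exit 0 before any main container starts, even under RestartPolicy Always. A minimal sketch of a pod shaped like the one this test creates (names and images are stand-ins):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// Both init containers must complete, in order, before run-1 starts.
			InitContainers: []corev1.Container{
				{Name: "init-1", Image: "busybox", Command: []string{"/bin/true"}},
				{Name: "init-2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{{
				Name:    "run-1",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
			}},
		},
	}
	fmt.Println(pod.Name)
}
------------------------------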
• [SLOW TEST:15.297 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":201,"skipped":3122,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:56:39.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:56:39.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-129" for this suite. 
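------------------------------
For reference, the "always fails" pod here runs a command that exits non-zero on every start, so it crash-loops under the default restart policy; the test only asserts that such a pod can still be deleted cleanly. A minimal sketch:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false-example"},
		Spec: corev1.PodSpec{
			// RestartPolicy defaults to Always, so this container crash-loops.
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // exits non-zero on every start
			}},
		},
	}
	fmt.Println(pod.Name)
}
------------------------------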
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3135,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:56:39.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: set up a multi version CRD Jan 24 00:56:40.049: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:56:57.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6339" for this suite. • [SLOW TEST:17.426 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":203,"skipped":3141,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:56:57.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-secret-mv7t STEP: Creating a pod to test atomic-volume-subpath Jan 24 00:56:57.505: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-mv7t" in namespace "subpath-7750" to be "success or failure" Jan 24 00:56:57.512: INFO: Pod "pod-subpath-test-secret-mv7t": 
Phase="Pending", Reason="", readiness=false. Elapsed: 7.050135ms Jan 24 00:56:59.518: INFO: Pod "pod-subpath-test-secret-mv7t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012695606s Jan 24 00:57:01.524: INFO: Pod "pod-subpath-test-secret-mv7t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019244742s Jan 24 00:57:03.529: INFO: Pod "pod-subpath-test-secret-mv7t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024244714s Jan 24 00:57:05.535: INFO: Pod "pod-subpath-test-secret-mv7t": Phase="Running", Reason="", readiness=true. Elapsed: 8.029780234s Jan 24 00:57:07.539: INFO: Pod "pod-subpath-test-secret-mv7t": Phase="Running", Reason="", readiness=true. Elapsed: 10.034384385s Jan 24 00:57:09.547: INFO: Pod "pod-subpath-test-secret-mv7t": Phase="Running", Reason="", readiness=true. Elapsed: 12.042294283s Jan 24 00:57:11.553: INFO: Pod "pod-subpath-test-secret-mv7t": Phase="Running", Reason="", readiness=true. Elapsed: 14.048364042s Jan 24 00:57:13.557: INFO: Pod "pod-subpath-test-secret-mv7t": Phase="Running", Reason="", readiness=true. Elapsed: 16.0523154s Jan 24 00:57:15.564: INFO: Pod "pod-subpath-test-secret-mv7t": Phase="Running", Reason="", readiness=true. Elapsed: 18.058787372s Jan 24 00:57:17.567: INFO: Pod "pod-subpath-test-secret-mv7t": Phase="Running", Reason="", readiness=true. Elapsed: 20.062595624s Jan 24 00:57:19.572: INFO: Pod "pod-subpath-test-secret-mv7t": Phase="Running", Reason="", readiness=true. Elapsed: 22.067249982s Jan 24 00:57:21.601: INFO: Pod "pod-subpath-test-secret-mv7t": Phase="Running", Reason="", readiness=true. Elapsed: 24.096481874s Jan 24 00:57:23.623: INFO: Pod "pod-subpath-test-secret-mv7t": Phase="Running", Reason="", readiness=true. Elapsed: 26.118006443s Jan 24 00:57:25.628: INFO: Pod "pod-subpath-test-secret-mv7t": Phase="Running", Reason="", readiness=true. Elapsed: 28.122850841s Jan 24 00:57:27.632: INFO: Pod "pod-subpath-test-secret-mv7t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.127079441s STEP: Saw pod success Jan 24 00:57:27.632: INFO: Pod "pod-subpath-test-secret-mv7t" satisfied condition "success or failure" Jan 24 00:57:27.639: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-mv7t container test-container-subpath-secret-mv7t: STEP: delete the pod Jan 24 00:57:27.714: INFO: Waiting for pod pod-subpath-test-secret-mv7t to disappear Jan 24 00:57:27.906: INFO: Pod pod-subpath-test-secret-mv7t no longer exists STEP: Deleting pod pod-subpath-test-secret-mv7t Jan 24 00:57:27.906: INFO: Deleting pod "pod-subpath-test-secret-mv7t" in namespace "subpath-7750" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 00:57:27.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7750" for this suite. 
• [SLOW TEST:30.588 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":204,"skipped":3166,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 00:57:27.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 24 00:57:28.403: INFO: (0) /api/v1/nodes/jerma-node/proxy/logs/:
alternatives.log
apt/
... (200; 7.626405ms)
Jan 24 00:57:28.407: INFO: (1) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 4.079211ms)
Jan 24 00:57:28.412: INFO: (2) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 4.805507ms)
Jan 24 00:57:28.417: INFO: (3) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 4.957868ms)
Jan 24 00:57:28.423: INFO: (4) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 6.008629ms)
Jan 24 00:57:28.426: INFO: (5) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.172471ms)
Jan 24 00:57:28.430: INFO: (6) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.992236ms)
Jan 24 00:57:28.434: INFO: (7) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.49187ms)
Jan 24 00:57:28.438: INFO: (8) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 4.044409ms)
Jan 24 00:57:28.444: INFO: (9) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 6.175264ms)
Jan 24 00:57:28.449: INFO: (10) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 4.213563ms)
Jan 24 00:57:28.452: INFO: (11) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.130363ms)
Jan 24 00:57:28.456: INFO: (12) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.931624ms)
Jan 24 00:57:28.460: INFO: (13) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 4.492475ms)
Jan 24 00:57:28.464: INFO: (14) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.462868ms)
Jan 24 00:57:28.467: INFO: (15) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.66814ms)
Jan 24 00:57:28.472: INFO: (16) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 4.202093ms)
Jan 24 00:57:28.475: INFO: (17) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.715512ms)
Jan 24 00:57:28.480: INFO: (18) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 4.834049ms)
Jan 24 00:57:28.484: INFO: (19) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.704216ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 00:57:28.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8107" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":278,"completed":205,"skipped":3185,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 00:57:28.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 24 00:57:38.317: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 00:57:38.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3473" for this suite.

• [SLOW TEST:9.882 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3214,"failed":0}
SSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 00:57:38.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 00:57:54.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5223" for this suite.

• [SLOW TEST:16.513 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":207,"skipped":3218,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 00:57:54.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 24 00:57:55.128: INFO: Waiting up to 5m0s for pod "pod-717f1e8f-9462-43cd-9a4b-3e62494e4038" in namespace "emptydir-5693" to be "success or failure"
Jan 24 00:57:55.147: INFO: Pod "pod-717f1e8f-9462-43cd-9a4b-3e62494e4038": Phase="Pending", Reason="", readiness=false. Elapsed: 18.471105ms
Jan 24 00:57:57.151: INFO: Pod "pod-717f1e8f-9462-43cd-9a4b-3e62494e4038": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022434485s
Jan 24 00:57:59.156: INFO: Pod "pod-717f1e8f-9462-43cd-9a4b-3e62494e4038": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027595108s
Jan 24 00:58:01.163: INFO: Pod "pod-717f1e8f-9462-43cd-9a4b-3e62494e4038": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034966018s
Jan 24 00:58:03.168: INFO: Pod "pod-717f1e8f-9462-43cd-9a4b-3e62494e4038": Phase="Pending", Reason="", readiness=false. Elapsed: 8.039394128s
Jan 24 00:58:05.174: INFO: Pod "pod-717f1e8f-9462-43cd-9a4b-3e62494e4038": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.04545479s
STEP: Saw pod success
Jan 24 00:58:05.174: INFO: Pod "pod-717f1e8f-9462-43cd-9a4b-3e62494e4038" satisfied condition "success or failure"
Jan 24 00:58:05.178: INFO: Trying to get logs from node jerma-node pod pod-717f1e8f-9462-43cd-9a4b-3e62494e4038 container test-container: 
STEP: delete the pod
Jan 24 00:58:05.343: INFO: Waiting for pod pod-717f1e8f-9462-43cd-9a4b-3e62494e4038 to disappear
Jan 24 00:58:05.395: INFO: Pod pod-717f1e8f-9462-43cd-9a4b-3e62494e4038 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 00:58:05.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5693" for this suite.

• [SLOW TEST:10.521 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3223,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 00:58:05.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 24 00:58:05.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 00:58:13.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5963" for this suite.

• [SLOW TEST:8.521 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3250,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 00:58:13.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod test-webserver-49d0cd8a-881f-4a99-bbe1-8ed293b61664 in namespace container-probe-8488
Jan 24 00:58:20.161: INFO: Started pod test-webserver-49d0cd8a-881f-4a99-bbe1-8ed293b61664 in namespace container-probe-8488
STEP: checking the pod's current state and verifying that restartCount is present
Jan 24 00:58:20.165: INFO: Initial restart count of pod test-webserver-49d0cd8a-881f-4a99-bbe1-8ed293b61664 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:02:21.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8488" for this suite.

• [SLOW TEST:248.053 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3317,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:02:21.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-64d94fc3-d296-4e3b-9b69-f1f02d4df7b3
STEP: Creating a pod to test consume secrets
Jan 24 01:02:22.302: INFO: Waiting up to 5m0s for pod "pod-secrets-d6507901-46f1-4ae2-b129-84c553aa4b5b" in namespace "secrets-7907" to be "success or failure"
Jan 24 01:02:22.354: INFO: Pod "pod-secrets-d6507901-46f1-4ae2-b129-84c553aa4b5b": Phase="Pending", Reason="", readiness=false. Elapsed: 51.657652ms
Jan 24 01:02:24.360: INFO: Pod "pod-secrets-d6507901-46f1-4ae2-b129-84c553aa4b5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057435934s
Jan 24 01:02:26.367: INFO: Pod "pod-secrets-d6507901-46f1-4ae2-b129-84c553aa4b5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064396637s
Jan 24 01:02:28.373: INFO: Pod "pod-secrets-d6507901-46f1-4ae2-b129-84c553aa4b5b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070583321s
Jan 24 01:02:30.378: INFO: Pod "pod-secrets-d6507901-46f1-4ae2-b129-84c553aa4b5b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076029851s
Jan 24 01:02:32.383: INFO: Pod "pod-secrets-d6507901-46f1-4ae2-b129-84c553aa4b5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08046929s
STEP: Saw pod success
Jan 24 01:02:32.383: INFO: Pod "pod-secrets-d6507901-46f1-4ae2-b129-84c553aa4b5b" satisfied condition "success or failure"
Jan 24 01:02:32.386: INFO: Trying to get logs from node jerma-node pod pod-secrets-d6507901-46f1-4ae2-b129-84c553aa4b5b container secret-volume-test: 
STEP: delete the pod
Jan 24 01:02:32.538: INFO: Waiting for pod pod-secrets-d6507901-46f1-4ae2-b129-84c553aa4b5b to disappear
Jan 24 01:02:32.551: INFO: Pod pod-secrets-d6507901-46f1-4ae2-b129-84c553aa4b5b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:02:32.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7907" for this suite.
STEP: Destroying namespace "secret-namespace-9723" for this suite.

• [SLOW TEST:10.606 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3336,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:02:32.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:02:32.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3572" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":212,"skipped":3355,"failed":0}

------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:02:32.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[BeforeEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1789
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 24 01:02:33.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3474'
Jan 24 01:02:34.980: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 24 01:02:34.980: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1794
Jan 24 01:02:35.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-3474'
Jan 24 01:02:35.314: INFO: stderr: ""
Jan 24 01:02:35.315: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:02:35.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3474" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure  [Conformance]","total":278,"completed":213,"skipped":3355,"failed":0}
SSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:02:35.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:687
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service multi-endpoint-test in namespace services-4126
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4126 to expose endpoints map[]
Jan 24 01:02:36.062: INFO: successfully validated that service multi-endpoint-test in namespace services-4126 exposes endpoints map[] (14.079213ms elapsed)
STEP: Creating pod pod1 in namespace services-4126
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4126 to expose endpoints map[pod1:[100]]
Jan 24 01:02:40.176: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.095401028s elapsed, will retry)
Jan 24 01:02:45.250: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (9.169316958s elapsed, will retry)
Jan 24 01:02:46.284: INFO: successfully validated that service multi-endpoint-test in namespace services-4126 exposes endpoints map[pod1:[100]] (10.204061396s elapsed)
STEP: Creating pod pod2 in namespace services-4126
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4126 to expose endpoints map[pod1:[100] pod2:[101]]
Jan 24 01:02:51.384: INFO: Unexpected endpoints: found map[b6bbe2c2-a36e-47cb-81cf-ac28ee2dbd92:[100]], expected map[pod1:[100] pod2:[101]] (5.096686767s elapsed, will retry)
Jan 24 01:02:54.552: INFO: successfully validated that service multi-endpoint-test in namespace services-4126 exposes endpoints map[pod1:[100] pod2:[101]] (8.264513627s elapsed)
STEP: Deleting pod pod1 in namespace services-4126
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4126 to expose endpoints map[pod2:[101]]
Jan 24 01:02:55.599: INFO: successfully validated that service multi-endpoint-test in namespace services-4126 exposes endpoints map[pod2:[101]] (1.043119923s elapsed)
STEP: Deleting pod pod2 in namespace services-4126
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4126 to expose endpoints map[]
Jan 24 01:02:57.053: INFO: successfully validated that service multi-endpoint-test in namespace services-4126 exposes endpoints map[] (1.445987147s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:02:57.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4126" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691

• [SLOW TEST:22.404 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":214,"skipped":3360,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:02:57.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 24 01:02:58.131: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8daf7dfb-ddad-48bd-bb70-27d1b04bb7d0" in namespace "downward-api-1896" to be "success or failure"
Jan 24 01:02:58.179: INFO: Pod "downwardapi-volume-8daf7dfb-ddad-48bd-bb70-27d1b04bb7d0": Phase="Pending", Reason="", readiness=false. Elapsed: 47.668479ms
Jan 24 01:03:00.218: INFO: Pod "downwardapi-volume-8daf7dfb-ddad-48bd-bb70-27d1b04bb7d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086103608s
Jan 24 01:03:02.224: INFO: Pod "downwardapi-volume-8daf7dfb-ddad-48bd-bb70-27d1b04bb7d0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092113321s
Jan 24 01:03:04.228: INFO: Pod "downwardapi-volume-8daf7dfb-ddad-48bd-bb70-27d1b04bb7d0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096416398s
Jan 24 01:03:06.233: INFO: Pod "downwardapi-volume-8daf7dfb-ddad-48bd-bb70-27d1b04bb7d0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101854958s
Jan 24 01:03:08.241: INFO: Pod "downwardapi-volume-8daf7dfb-ddad-48bd-bb70-27d1b04bb7d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.109100937s
STEP: Saw pod success
Jan 24 01:03:08.241: INFO: Pod "downwardapi-volume-8daf7dfb-ddad-48bd-bb70-27d1b04bb7d0" satisfied condition "success or failure"
Jan 24 01:03:08.246: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-8daf7dfb-ddad-48bd-bb70-27d1b04bb7d0 container client-container: 
STEP: delete the pod
Jan 24 01:03:08.322: INFO: Waiting for pod downwardapi-volume-8daf7dfb-ddad-48bd-bb70-27d1b04bb7d0 to disappear
Jan 24 01:03:08.334: INFO: Pod downwardapi-volume-8daf7dfb-ddad-48bd-bb70-27d1b04bb7d0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:03:08.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1896" for this suite.

• [SLOW TEST:10.724 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3368,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:03:08.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 24 01:03:17.258: INFO: Successfully updated pod "pod-update-4ca611b6-3b6a-4704-a598-143af411827b"
STEP: verifying the updated pod is in kubernetes
Jan 24 01:03:17.283: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:03:17.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3561" for this suite.

• [SLOW TEST:8.975 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3378,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:03:17.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: getting the auto-created API token
Jan 24 01:03:18.020: INFO: created pod pod-service-account-defaultsa
Jan 24 01:03:18.020: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 24 01:03:18.075: INFO: created pod pod-service-account-mountsa
Jan 24 01:03:18.075: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 24 01:03:18.086: INFO: created pod pod-service-account-nomountsa
Jan 24 01:03:18.086: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 24 01:03:18.117: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 24 01:03:18.117: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 24 01:03:18.130: INFO: created pod pod-service-account-mountsa-mountspec
Jan 24 01:03:18.130: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 24 01:03:18.379: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 24 01:03:18.380: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 24 01:03:18.409: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 24 01:03:18.409: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 24 01:03:18.446: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 24 01:03:18.447: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 24 01:03:18.477: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 24 01:03:18.477: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:03:18.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8417" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":217,"skipped":3401,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:03:20.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:03:50.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2970" for this suite.

• [SLOW TEST:30.392 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":218,"skipped":3419,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:03:50.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 24 01:03:51.685: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 24 01:03:53.707: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424631, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424631, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424631, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424631, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 01:03:55.738: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424631, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424631, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424631, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424631, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 01:03:58.042: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424631, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424631, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424631, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424631, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 01:03:59.934: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424631, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424631, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424631, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424631, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 24 01:04:02.751: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 24 01:04:02.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7356-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:04:04.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4882" for this suite.
STEP: Destroying namespace "webhook-4882-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:13.782 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":219,"skipped":3437,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:04:04.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 24 01:04:04.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Jan 24 01:04:04.670: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-24T01:04:04Z generation:1 name:name1 resourceVersion:3925800 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f3d6697a-6bbe-4cd0-8500-68c55b224483] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Jan 24 01:04:14.677: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-24T01:04:14Z generation:1 name:name2 resourceVersion:3925842 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:122cb7fb-2bfe-4a05-a6b0-1d9a4eaef050] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Jan 24 01:04:24.688: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-24T01:04:04Z generation:2 name:name1 resourceVersion:3925868 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f3d6697a-6bbe-4cd0-8500-68c55b224483] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Jan 24 01:04:34.697: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-24T01:04:14Z generation:2 name:name2 resourceVersion:3925889 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:122cb7fb-2bfe-4a05-a6b0-1d9a4eaef050] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Jan 24 01:04:44.706: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-24T01:04:04Z generation:2 name:name1 resourceVersion:3925913 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f3d6697a-6bbe-4cd0-8500-68c55b224483] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Jan 24 01:04:54.719: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-24T01:04:14Z generation:2 name:name2 resourceVersion:3925937 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:122cb7fb-2bfe-4a05-a6b0-1d9a4eaef050] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:05:05.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-5538" for this suite.

• [SLOW TEST:60.914 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":220,"skipped":3462,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:05:05.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 24 01:05:05.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:05:13.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8177" for this suite.

• [SLOW TEST:8.274 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3485,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:05:13.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override arguments
Jan 24 01:05:13.692: INFO: Waiting up to 5m0s for pod "client-containers-65413b73-723f-4010-ba24-7bb8c74cbcf3" in namespace "containers-6342" to be "success or failure"
Jan 24 01:05:13.710: INFO: Pod "client-containers-65413b73-723f-4010-ba24-7bb8c74cbcf3": Phase="Pending", Reason="", readiness=false. Elapsed: 18.282973ms
Jan 24 01:05:15.715: INFO: Pod "client-containers-65413b73-723f-4010-ba24-7bb8c74cbcf3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023369422s
Jan 24 01:05:17.734: INFO: Pod "client-containers-65413b73-723f-4010-ba24-7bb8c74cbcf3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041716143s
Jan 24 01:05:19.737: INFO: Pod "client-containers-65413b73-723f-4010-ba24-7bb8c74cbcf3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045494297s
Jan 24 01:05:21.743: INFO: Pod "client-containers-65413b73-723f-4010-ba24-7bb8c74cbcf3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051180512s
STEP: Saw pod success
Jan 24 01:05:21.743: INFO: Pod "client-containers-65413b73-723f-4010-ba24-7bb8c74cbcf3" satisfied condition "success or failure"
Jan 24 01:05:21.747: INFO: Trying to get logs from node jerma-node pod client-containers-65413b73-723f-4010-ba24-7bb8c74cbcf3 container test-container: 
STEP: delete the pod
Jan 24 01:05:21.791: INFO: Waiting for pod client-containers-65413b73-723f-4010-ba24-7bb8c74cbcf3 to disappear
Jan 24 01:05:21.904: INFO: Pod client-containers-65413b73-723f-4010-ba24-7bb8c74cbcf3 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:05:21.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6342" for this suite.

• [SLOW TEST:8.362 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3496,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:05:21.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:05:38.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-798" for this suite.

• [SLOW TEST:16.261 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":223,"skipped":3516,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:05:38.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 24 01:05:46.418: INFO: Waiting up to 5m0s for pod "client-envvars-a0730949-43b3-4b4e-a22f-d61aec71a125" in namespace "pods-6839" to be "success or failure"
Jan 24 01:05:46.437: INFO: Pod "client-envvars-a0730949-43b3-4b4e-a22f-d61aec71a125": Phase="Pending", Reason="", readiness=false. Elapsed: 18.803282ms
Jan 24 01:05:48.446: INFO: Pod "client-envvars-a0730949-43b3-4b4e-a22f-d61aec71a125": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027883223s
Jan 24 01:05:50.454: INFO: Pod "client-envvars-a0730949-43b3-4b4e-a22f-d61aec71a125": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035804488s
Jan 24 01:05:52.466: INFO: Pod "client-envvars-a0730949-43b3-4b4e-a22f-d61aec71a125": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047808999s
Jan 24 01:05:54.472: INFO: Pod "client-envvars-a0730949-43b3-4b4e-a22f-d61aec71a125": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054342127s
Jan 24 01:05:56.477: INFO: Pod "client-envvars-a0730949-43b3-4b4e-a22f-d61aec71a125": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.058743717s
STEP: Saw pod success
Jan 24 01:05:56.477: INFO: Pod "client-envvars-a0730949-43b3-4b4e-a22f-d61aec71a125" satisfied condition "success or failure"
Jan 24 01:05:56.480: INFO: Trying to get logs from node jerma-node pod client-envvars-a0730949-43b3-4b4e-a22f-d61aec71a125 container env3cont: 
STEP: delete the pod
Jan 24 01:05:56.625: INFO: Waiting for pod client-envvars-a0730949-43b3-4b4e-a22f-d61aec71a125 to disappear
Jan 24 01:05:56.645: INFO: Pod client-envvars-a0730949-43b3-4b4e-a22f-d61aec71a125 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:05:56.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6839" for this suite.

• [SLOW TEST:18.482 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3525,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:05:56.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-472fe0eb-51e3-4f86-bbcf-a6163c7a16f9 in namespace container-probe-6978
Jan 24 01:06:06.845: INFO: Started pod liveness-472fe0eb-51e3-4f86-bbcf-a6163c7a16f9 in namespace container-probe-6978
STEP: checking the pod's current state and verifying that restartCount is present
Jan 24 01:06:06.849: INFO: Initial restart count of pod liveness-472fe0eb-51e3-4f86-bbcf-a6163c7a16f9 is 0
Jan 24 01:06:20.906: INFO: Restart count of pod container-probe-6978/liveness-472fe0eb-51e3-4f86-bbcf-a6163c7a16f9 is now 1 (14.056946484s elapsed)
Jan 24 01:06:42.983: INFO: Restart count of pod container-probe-6978/liveness-472fe0eb-51e3-4f86-bbcf-a6163c7a16f9 is now 2 (36.133987618s elapsed)
Jan 24 01:07:03.096: INFO: Restart count of pod container-probe-6978/liveness-472fe0eb-51e3-4f86-bbcf-a6163c7a16f9 is now 3 (56.247275091s elapsed)
Jan 24 01:07:21.302: INFO: Restart count of pod container-probe-6978/liveness-472fe0eb-51e3-4f86-bbcf-a6163c7a16f9 is now 4 (1m14.452891348s elapsed)
Jan 24 01:08:25.585: INFO: Restart count of pod container-probe-6978/liveness-472fe0eb-51e3-4f86-bbcf-a6163c7a16f9 is now 5 (2m18.735904847s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:08:25.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6978" for this suite.

• [SLOW TEST:148.978 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3566,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:08:25.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 24 01:08:26.623: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 24 01:08:28.652: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424906, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424906, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424906, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424906, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 01:08:30.663: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424906, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424906, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424906, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424906, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 01:08:32.657: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424906, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424906, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424906, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424906, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 01:08:34.659: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424906, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424906, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424906, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424906, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 24 01:08:37.739: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:08:37.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2136" for this suite.
STEP: Destroying namespace "webhook-2136-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.499 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":226,"skipped":3587,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:08:38.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 24 01:08:39.039: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jan 24 01:08:41.058: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424919, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424919, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424919, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424919, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 01:08:43.063: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424919, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424919, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424919, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424919, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 01:08:45.066: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424919, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424919, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424919, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424919, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 01:08:47.066: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424919, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424919, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424919, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715424919, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 24 01:08:50.240: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 24 01:08:50.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:08:51.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-4101" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:13.276 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":227,"skipped":3592,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:08:51.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-881798fd-45da-4744-b8c2-4406b9677308
STEP: Creating a pod to test consume secrets
Jan 24 01:08:51.566: INFO: Waiting up to 5m0s for pod "pod-secrets-b6abaf38-c1cb-473b-a432-4b7c8e58222a" in namespace "secrets-4987" to be "success or failure"
Jan 24 01:08:51.770: INFO: Pod "pod-secrets-b6abaf38-c1cb-473b-a432-4b7c8e58222a": Phase="Pending", Reason="", readiness=false. Elapsed: 203.998899ms
Jan 24 01:08:53.788: INFO: Pod "pod-secrets-b6abaf38-c1cb-473b-a432-4b7c8e58222a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221487862s
Jan 24 01:08:55.795: INFO: Pod "pod-secrets-b6abaf38-c1cb-473b-a432-4b7c8e58222a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.228330107s
Jan 24 01:08:57.805: INFO: Pod "pod-secrets-b6abaf38-c1cb-473b-a432-4b7c8e58222a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.238323725s
Jan 24 01:08:59.817: INFO: Pod "pod-secrets-b6abaf38-c1cb-473b-a432-4b7c8e58222a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.250662112s
Jan 24 01:09:01.827: INFO: Pod "pod-secrets-b6abaf38-c1cb-473b-a432-4b7c8e58222a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.261066663s
STEP: Saw pod success
Jan 24 01:09:01.828: INFO: Pod "pod-secrets-b6abaf38-c1cb-473b-a432-4b7c8e58222a" satisfied condition "success or failure"
Jan 24 01:09:01.850: INFO: Trying to get logs from node jerma-node pod pod-secrets-b6abaf38-c1cb-473b-a432-4b7c8e58222a container secret-volume-test: 
STEP: delete the pod
Jan 24 01:09:02.064: INFO: Waiting for pod pod-secrets-b6abaf38-c1cb-473b-a432-4b7c8e58222a to disappear
Jan 24 01:09:02.079: INFO: Pod pod-secrets-b6abaf38-c1cb-473b-a432-4b7c8e58222a no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:09:02.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4987" for this suite.

• [SLOW TEST:10.672 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3613,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:09:02.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:09:02.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6379" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":229,"skipped":3644,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:09:02.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 24 01:09:02.538: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 23.871523ms)
Jan 24 01:09:02.546: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 7.220777ms)
Jan 24 01:09:02.567: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 21.248436ms)
Jan 24 01:09:02.599: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 31.531169ms)
Jan 24 01:09:02.603: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 4.46887ms)
Jan 24 01:09:02.608: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 4.322766ms)
Jan 24 01:09:02.614: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 6.376022ms)
Jan 24 01:09:02.617: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.232412ms)
Jan 24 01:09:02.622: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 4.12561ms)
Jan 24 01:09:02.625: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.089076ms)
Jan 24 01:09:02.628: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.689527ms)
Jan 24 01:09:02.631: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 2.508252ms)
Jan 24 01:09:02.633: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 2.337035ms)
Jan 24 01:09:02.637: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.32109ms)
Jan 24 01:09:02.639: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 2.546702ms)
Jan 24 01:09:02.642: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 2.713575ms)
Jan 24 01:09:02.645: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.180041ms)
Jan 24 01:09:02.648: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 2.500944ms)
Jan 24 01:09:02.650: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 2.480449ms)
Jan 24 01:09:02.656: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 5.443342ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:09:02.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7161" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":278,"completed":230,"skipped":3664,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:09:02.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 24 01:09:02.786: INFO: Waiting up to 5m0s for pod "busybox-user-65534-c34e75a4-d037-4e42-98c3-88a679ac824b" in namespace "security-context-test-634" to be "success or failure"
Jan 24 01:09:02.804: INFO: Pod "busybox-user-65534-c34e75a4-d037-4e42-98c3-88a679ac824b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.584208ms
Jan 24 01:09:04.811: INFO: Pod "busybox-user-65534-c34e75a4-d037-4e42-98c3-88a679ac824b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025399576s
Jan 24 01:09:06.817: INFO: Pod "busybox-user-65534-c34e75a4-d037-4e42-98c3-88a679ac824b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031739346s
Jan 24 01:09:08.827: INFO: Pod "busybox-user-65534-c34e75a4-d037-4e42-98c3-88a679ac824b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041518742s
Jan 24 01:09:10.834: INFO: Pod "busybox-user-65534-c34e75a4-d037-4e42-98c3-88a679ac824b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048008169s
Jan 24 01:09:12.842: INFO: Pod "busybox-user-65534-c34e75a4-d037-4e42-98c3-88a679ac824b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.056714536s
Jan 24 01:09:14.853: INFO: Pod "busybox-user-65534-c34e75a4-d037-4e42-98c3-88a679ac824b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.067386713s
Jan 24 01:09:14.853: INFO: Pod "busybox-user-65534-c34e75a4-d037-4e42-98c3-88a679ac824b" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:09:14.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-634" for this suite.

• [SLOW TEST:12.207 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3694,"failed":0}
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:09:14.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Jan 24 01:09:14.994: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:09:28.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4792" for this suite.

• [SLOW TEST:13.726 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":232,"skipped":3696,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:09:28.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-upd-a54a9f28-c14b-4396-9623-5b5dd45e3646
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:09:38.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8994" for this suite.

• [SLOW TEST:10.200 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3704,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:09:38.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Jan 24 01:09:38.890: INFO: >>> kubeConfig: /root/.kube/config
Jan 24 01:09:41.835: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:09:53.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-894" for this suite.

• [SLOW TEST:15.102 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":234,"skipped":3708,"failed":0}
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:09:53.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 24 01:10:03.071: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:10:03.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9226" for this suite.

• [SLOW TEST:9.691 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":235,"skipped":3708,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:10:03.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-33321dad-d7f7-418a-8064-ca8257397c57
STEP: Creating a pod to test consume configMaps
Jan 24 01:10:03.792: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0cbcc811-e812-46e4-a3c7-5558a8a1326b" in namespace "projected-9346" to be "success or failure"
Jan 24 01:10:03.873: INFO: Pod "pod-projected-configmaps-0cbcc811-e812-46e4-a3c7-5558a8a1326b": Phase="Pending", Reason="", readiness=false. Elapsed: 80.918918ms
Jan 24 01:10:05.887: INFO: Pod "pod-projected-configmaps-0cbcc811-e812-46e4-a3c7-5558a8a1326b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095170132s
Jan 24 01:10:07.892: INFO: Pod "pod-projected-configmaps-0cbcc811-e812-46e4-a3c7-5558a8a1326b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099948602s
Jan 24 01:10:09.900: INFO: Pod "pod-projected-configmaps-0cbcc811-e812-46e4-a3c7-5558a8a1326b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108354559s
Jan 24 01:10:11.906: INFO: Pod "pod-projected-configmaps-0cbcc811-e812-46e4-a3c7-5558a8a1326b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11389583s
Jan 24 01:10:13.975: INFO: Pod "pod-projected-configmaps-0cbcc811-e812-46e4-a3c7-5558a8a1326b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.183124807s
Jan 24 01:10:15.980: INFO: Pod "pod-projected-configmaps-0cbcc811-e812-46e4-a3c7-5558a8a1326b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.188680174s
Jan 24 01:10:18.023: INFO: Pod "pod-projected-configmaps-0cbcc811-e812-46e4-a3c7-5558a8a1326b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.230843102s
STEP: Saw pod success
Jan 24 01:10:18.023: INFO: Pod "pod-projected-configmaps-0cbcc811-e812-46e4-a3c7-5558a8a1326b" satisfied condition "success or failure"
Jan 24 01:10:18.092: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-0cbcc811-e812-46e4-a3c7-5558a8a1326b container projected-configmap-volume-test: 
STEP: delete the pod
Jan 24 01:10:18.193: INFO: Waiting for pod pod-projected-configmaps-0cbcc811-e812-46e4-a3c7-5558a8a1326b to disappear
Jan 24 01:10:18.198: INFO: Pod pod-projected-configmaps-0cbcc811-e812-46e4-a3c7-5558a8a1326b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:10:18.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9346" for this suite.

• [SLOW TEST:14.618 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3720,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:10:18.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 24 01:10:18.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 24 01:10:21.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8722 create -f -'
Jan 24 01:10:24.249: INFO: stderr: ""
Jan 24 01:10:24.249: INFO: stdout: "e2e-test-crd-publish-openapi-6776-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jan 24 01:10:24.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8722 delete e2e-test-crd-publish-openapi-6776-crds test-cr'
Jan 24 01:10:24.405: INFO: stderr: ""
Jan 24 01:10:24.405: INFO: stdout: "e2e-test-crd-publish-openapi-6776-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Jan 24 01:10:24.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8722 apply -f -'
Jan 24 01:10:24.665: INFO: stderr: ""
Jan 24 01:10:24.665: INFO: stdout: "e2e-test-crd-publish-openapi-6776-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jan 24 01:10:24.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8722 delete e2e-test-crd-publish-openapi-6776-crds test-cr'
Jan 24 01:10:24.766: INFO: stderr: ""
Jan 24 01:10:24.767: INFO: stdout: "e2e-test-crd-publish-openapi-6776-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Jan 24 01:10:24.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6776-crds'
Jan 24 01:10:25.035: INFO: stderr: ""
Jan 24 01:10:25.035: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6776-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:10:27.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8722" for this suite.

• [SLOW TEST:9.794 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":237,"skipped":3747,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:10:28.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 24 01:10:28.691: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jan 24 01:10:30.703: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715425028, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715425028, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715425028, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715425028, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 01:10:32.707: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715425028, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715425028, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715425028, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715425028, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 01:10:34.711: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715425028, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715425028, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715425028, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715425028, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 24 01:10:37.748: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 24 01:10:37.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:10:39.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-6426" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:11.207 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":238,"skipped":3751,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:10:39.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 24 01:10:39.339: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af869a79-0d77-43ff-a542-1e177104241d" in namespace "projected-3290" to be "success or failure"
Jan 24 01:10:39.456: INFO: Pod "downwardapi-volume-af869a79-0d77-43ff-a542-1e177104241d": Phase="Pending", Reason="", readiness=false. Elapsed: 117.490906ms
Jan 24 01:10:41.464: INFO: Pod "downwardapi-volume-af869a79-0d77-43ff-a542-1e177104241d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125489258s
Jan 24 01:10:43.472: INFO: Pod "downwardapi-volume-af869a79-0d77-43ff-a542-1e177104241d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133325271s
Jan 24 01:10:45.480: INFO: Pod "downwardapi-volume-af869a79-0d77-43ff-a542-1e177104241d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141531171s
Jan 24 01:10:47.514: INFO: Pod "downwardapi-volume-af869a79-0d77-43ff-a542-1e177104241d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.175499367s
Jan 24 01:10:49.525: INFO: Pod "downwardapi-volume-af869a79-0d77-43ff-a542-1e177104241d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.185745857s
STEP: Saw pod success
Jan 24 01:10:49.525: INFO: Pod "downwardapi-volume-af869a79-0d77-43ff-a542-1e177104241d" satisfied condition "success or failure"
Jan 24 01:10:49.530: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-af869a79-0d77-43ff-a542-1e177104241d container client-container: 
STEP: delete the pod
Jan 24 01:10:49.604: INFO: Waiting for pod downwardapi-volume-af869a79-0d77-43ff-a542-1e177104241d to disappear
Jan 24 01:10:49.615: INFO: Pod downwardapi-volume-af869a79-0d77-43ff-a542-1e177104241d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:10:49.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3290" for this suite.

• [SLOW TEST:10.423 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3765,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:10:49.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 24 01:10:59.830: INFO: &Pod{ObjectMeta:{send-events-5e9dc83c-4888-43cb-b390-c10f024e401f  events-3161 /api/v1/namespaces/events-3161/pods/send-events-5e9dc83c-4888-43cb-b390-c10f024e401f 2c6c215c-998a-496b-b1cc-31b13dc1d3ac 3927443 0 2020-01-24 01:10:49 +0000 UTC   map[name:foo time:784064740] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qzwdr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qzwdr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qzwdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 01:10:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 01:10:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 01:10:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 01:10:49 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-24 01:10:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-24 01:10:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://a7ba2c7382e3696b40ac1fd31e9ab6064d3eea4329d5a8ba7421583ce907194e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Jan 24 01:11:01.839: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 24 01:11:03.846: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:11:03.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3161" for this suite.

• [SLOW TEST:14.256 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":240,"skipped":3773,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:11:03.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87
Jan 24 01:11:03.996: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 24 01:11:04.006: INFO: Waiting for terminating namespaces to be deleted...
Jan 24 01:11:04.009: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 24 01:11:04.019: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Jan 24 01:11:04.019: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 24 01:11:04.019: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 24 01:11:04.019: INFO: 	Container weave ready: true, restart count 1
Jan 24 01:11:04.019: INFO: 	Container weave-npc ready: true, restart count 0
Jan 24 01:11:04.019: INFO: send-events-5e9dc83c-4888-43cb-b390-c10f024e401f from events-3161 started at 2020-01-24 01:10:51 +0000 UTC (1 container statuses recorded)
Jan 24 01:11:04.019: INFO: 	Container p ready: true, restart count 0
Jan 24 01:11:04.019: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 24 01:11:04.036: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 24 01:11:04.036: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 24 01:11:04.036: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Jan 24 01:11:04.036: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 24 01:11:04.036: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 24 01:11:04.036: INFO: 	Container weave ready: true, restart count 0
Jan 24 01:11:04.036: INFO: 	Container weave-npc ready: true, restart count 0
Jan 24 01:11:04.036: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 24 01:11:04.036: INFO: 	Container kube-scheduler ready: true, restart count 3
Jan 24 01:11:04.036: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 24 01:11:04.037: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 24 01:11:04.037: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 24 01:11:04.037: INFO: 	Container etcd ready: true, restart count 1
Jan 24 01:11:04.037: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 24 01:11:04.037: INFO: 	Container coredns ready: true, restart count 0
Jan 24 01:11:04.037: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 24 01:11:04.037: INFO: 	Container coredns ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-339c9177-f125-478f-9e4c-eeb2d4cb0131 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here), expecting it to be scheduled
STEP: Trying to create another pod (pod5) with the same hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides, expecting it not to be scheduled
STEP: removing the label kubernetes.io/e2e-339c9177-f125-478f-9e4c-eeb2d4cb0131 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-339c9177-f125-478f-9e4c-eeb2d4cb0131
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:16:18.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8192" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78

• [SLOW TEST:314.696 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":241,"skipped":3826,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:16:18.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-configmap-whnl
STEP: Creating a pod to test atomic-volume-subpath
Jan 24 01:16:18.744: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-whnl" in namespace "subpath-1587" to be "success or failure"
Jan 24 01:16:18.760: INFO: Pod "pod-subpath-test-configmap-whnl": Phase="Pending", Reason="", readiness=false. Elapsed: 15.333882ms
Jan 24 01:16:20.766: INFO: Pod "pod-subpath-test-configmap-whnl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021381992s
Jan 24 01:16:22.773: INFO: Pod "pod-subpath-test-configmap-whnl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028263952s
Jan 24 01:16:24.776: INFO: Pod "pod-subpath-test-configmap-whnl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031643289s
Jan 24 01:16:26.781: INFO: Pod "pod-subpath-test-configmap-whnl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036229131s
Jan 24 01:16:28.789: INFO: Pod "pod-subpath-test-configmap-whnl": Phase="Running", Reason="", readiness=true. Elapsed: 10.044159867s
Jan 24 01:16:30.828: INFO: Pod "pod-subpath-test-configmap-whnl": Phase="Running", Reason="", readiness=true. Elapsed: 12.083615445s
Jan 24 01:16:32.836: INFO: Pod "pod-subpath-test-configmap-whnl": Phase="Running", Reason="", readiness=true. Elapsed: 14.091235359s
Jan 24 01:16:34.841: INFO: Pod "pod-subpath-test-configmap-whnl": Phase="Running", Reason="", readiness=true. Elapsed: 16.096393377s
Jan 24 01:16:36.846: INFO: Pod "pod-subpath-test-configmap-whnl": Phase="Running", Reason="", readiness=true. Elapsed: 18.101439405s
Jan 24 01:16:38.856: INFO: Pod "pod-subpath-test-configmap-whnl": Phase="Running", Reason="", readiness=true. Elapsed: 20.11119264s
Jan 24 01:16:40.863: INFO: Pod "pod-subpath-test-configmap-whnl": Phase="Running", Reason="", readiness=true. Elapsed: 22.118071564s
Jan 24 01:16:42.873: INFO: Pod "pod-subpath-test-configmap-whnl": Phase="Running", Reason="", readiness=true. Elapsed: 24.128167969s
Jan 24 01:16:44.879: INFO: Pod "pod-subpath-test-configmap-whnl": Phase="Running", Reason="", readiness=true. Elapsed: 26.134313812s
Jan 24 01:16:46.887: INFO: Pod "pod-subpath-test-configmap-whnl": Phase="Running", Reason="", readiness=true. Elapsed: 28.142158968s
Jan 24 01:16:48.895: INFO: Pod "pod-subpath-test-configmap-whnl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.150405549s
STEP: Saw pod success
Jan 24 01:16:48.895: INFO: Pod "pod-subpath-test-configmap-whnl" satisfied condition "success or failure"
Jan 24 01:16:48.898: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-whnl container test-container-subpath-configmap-whnl: 
STEP: delete the pod
Jan 24 01:16:48.960: INFO: Waiting for pod pod-subpath-test-configmap-whnl to disappear
Jan 24 01:16:48.967: INFO: Pod pod-subpath-test-configmap-whnl no longer exists
STEP: Deleting pod pod-subpath-test-configmap-whnl
Jan 24 01:16:48.967: INFO: Deleting pod "pod-subpath-test-configmap-whnl" in namespace "subpath-1587"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:16:48.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1587" for this suite.

• [SLOW TEST:30.386 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":242,"skipped":3850,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:16:48.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: validating api versions
Jan 24 01:16:49.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 24 01:16:49.411: INFO: stderr: ""
Jan 24 01:16:49.411: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:16:49.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-675" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":243,"skipped":3854,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:16:49.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 24 01:16:49.619: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a2130dad-ecb5-43a8-b392-28e763724e2c" in namespace "downward-api-9750" to be "success or failure"
Jan 24 01:16:49.633: INFO: Pod "downwardapi-volume-a2130dad-ecb5-43a8-b392-28e763724e2c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.350612ms
Jan 24 01:16:51.704: INFO: Pod "downwardapi-volume-a2130dad-ecb5-43a8-b392-28e763724e2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084444091s
Jan 24 01:16:53.738: INFO: Pod "downwardapi-volume-a2130dad-ecb5-43a8-b392-28e763724e2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119011547s
Jan 24 01:16:55.746: INFO: Pod "downwardapi-volume-a2130dad-ecb5-43a8-b392-28e763724e2c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126533879s
Jan 24 01:16:57.752: INFO: Pod "downwardapi-volume-a2130dad-ecb5-43a8-b392-28e763724e2c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.132792234s
Jan 24 01:16:59.759: INFO: Pod "downwardapi-volume-a2130dad-ecb5-43a8-b392-28e763724e2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.139344568s
STEP: Saw pod success
Jan 24 01:16:59.759: INFO: Pod "downwardapi-volume-a2130dad-ecb5-43a8-b392-28e763724e2c" satisfied condition "success or failure"
Jan 24 01:16:59.766: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-a2130dad-ecb5-43a8-b392-28e763724e2c container client-container: 
STEP: delete the pod
Jan 24 01:16:59.811: INFO: Waiting for pod downwardapi-volume-a2130dad-ecb5-43a8-b392-28e763724e2c to disappear
Jan 24 01:16:59.832: INFO: Pod downwardapi-volume-a2130dad-ecb5-43a8-b392-28e763724e2c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:16:59.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9750" for this suite.

• [SLOW TEST:10.469 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":3876,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:16:59.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 24 01:17:00.043: INFO: Waiting up to 5m0s for pod "downwardapi-volume-37a27b8e-7e98-42e4-9926-64b0d5d85c3b" in namespace "projected-513" to be "success or failure"
Jan 24 01:17:00.055: INFO: Pod "downwardapi-volume-37a27b8e-7e98-42e4-9926-64b0d5d85c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.622648ms
Jan 24 01:17:02.067: INFO: Pod "downwardapi-volume-37a27b8e-7e98-42e4-9926-64b0d5d85c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024272075s
Jan 24 01:17:04.074: INFO: Pod "downwardapi-volume-37a27b8e-7e98-42e4-9926-64b0d5d85c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030953563s
Jan 24 01:17:06.082: INFO: Pod "downwardapi-volume-37a27b8e-7e98-42e4-9926-64b0d5d85c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039044636s
Jan 24 01:17:08.090: INFO: Pod "downwardapi-volume-37a27b8e-7e98-42e4-9926-64b0d5d85c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046882158s
Jan 24 01:17:10.097: INFO: Pod "downwardapi-volume-37a27b8e-7e98-42e4-9926-64b0d5d85c3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054396382s
STEP: Saw pod success
Jan 24 01:17:10.097: INFO: Pod "downwardapi-volume-37a27b8e-7e98-42e4-9926-64b0d5d85c3b" satisfied condition "success or failure"
Jan 24 01:17:10.101: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-37a27b8e-7e98-42e4-9926-64b0d5d85c3b container client-container: 
STEP: delete the pod
Jan 24 01:17:10.228: INFO: Waiting for pod downwardapi-volume-37a27b8e-7e98-42e4-9926-64b0d5d85c3b to disappear
Jan 24 01:17:10.239: INFO: Pod downwardapi-volume-37a27b8e-7e98-42e4-9926-64b0d5d85c3b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:17:10.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-513" for this suite.

• [SLOW TEST:10.364 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":3887,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:17:10.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating Agnhost RC
Jan 24 01:17:10.427: INFO: namespace kubectl-757
Jan 24 01:17:10.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-757'
Jan 24 01:17:10.868: INFO: stderr: ""
Jan 24 01:17:10.868: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 24 01:17:11.881: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 24 01:17:11.881: INFO: Found 0 / 1
Jan 24 01:17:12.883: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 24 01:17:12.883: INFO: Found 0 / 1
Jan 24 01:17:13.899: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 24 01:17:13.899: INFO: Found 0 / 1
Jan 24 01:17:14.880: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 24 01:17:14.880: INFO: Found 0 / 1
Jan 24 01:17:15.903: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 24 01:17:15.903: INFO: Found 0 / 1
Jan 24 01:17:16.873: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 24 01:17:16.873: INFO: Found 0 / 1
Jan 24 01:17:17.877: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 24 01:17:17.878: INFO: Found 0 / 1
Jan 24 01:17:18.875: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 24 01:17:18.876: INFO: Found 1 / 1
Jan 24 01:17:18.876: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 24 01:17:18.880: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 24 01:17:18.880: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 24 01:17:18.880: INFO: wait on agnhost-master startup in kubectl-757 
Jan 24 01:17:18.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-cmj42 agnhost-master --namespace=kubectl-757'
Jan 24 01:17:18.995: INFO: stderr: ""
Jan 24 01:17:18.995: INFO: stdout: "Paused\n"
STEP: exposing RC
Jan 24 01:17:18.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-757'
Jan 24 01:17:19.112: INFO: stderr: ""
Jan 24 01:17:19.112: INFO: stdout: "service/rm2 exposed\n"
Jan 24 01:17:19.117: INFO: Service rm2 in namespace kubectl-757 found.
STEP: exposing service
Jan 24 01:17:21.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-757'
Jan 24 01:17:21.335: INFO: stderr: ""
Jan 24 01:17:21.335: INFO: stdout: "service/rm3 exposed\n"
Jan 24 01:17:21.352: INFO: Service rm3 in namespace kubectl-757 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:17:23.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-757" for this suite.

• [SLOW TEST:13.126 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1296
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":278,"completed":246,"skipped":3891,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:17:23.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 24 01:17:23.507: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5e3e7976-8885-4f02-a4b8-8c6c29ba2fd9" in namespace "projected-6318" to be "success or failure"
Jan 24 01:17:23.532: INFO: Pod "downwardapi-volume-5e3e7976-8885-4f02-a4b8-8c6c29ba2fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 25.143398ms
Jan 24 01:17:25.537: INFO: Pod "downwardapi-volume-5e3e7976-8885-4f02-a4b8-8c6c29ba2fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030335492s
Jan 24 01:17:27.546: INFO: Pod "downwardapi-volume-5e3e7976-8885-4f02-a4b8-8c6c29ba2fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039257855s
Jan 24 01:17:29.552: INFO: Pod "downwardapi-volume-5e3e7976-8885-4f02-a4b8-8c6c29ba2fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045388579s
Jan 24 01:17:31.558: INFO: Pod "downwardapi-volume-5e3e7976-8885-4f02-a4b8-8c6c29ba2fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051068559s
Jan 24 01:17:33.565: INFO: Pod "downwardapi-volume-5e3e7976-8885-4f02-a4b8-8c6c29ba2fd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.058360909s
STEP: Saw pod success
Jan 24 01:17:33.565: INFO: Pod "downwardapi-volume-5e3e7976-8885-4f02-a4b8-8c6c29ba2fd9" satisfied condition "success or failure"
Jan 24 01:17:33.570: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-5e3e7976-8885-4f02-a4b8-8c6c29ba2fd9 container client-container: 
STEP: delete the pod
Jan 24 01:17:33.643: INFO: Waiting for pod downwardapi-volume-5e3e7976-8885-4f02-a4b8-8c6c29ba2fd9 to disappear
Jan 24 01:17:33.658: INFO: Pod downwardapi-volume-5e3e7976-8885-4f02-a4b8-8c6c29ba2fd9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:17:33.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6318" for this suite.

• [SLOW TEST:10.282 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":3901,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:17:33.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating replication controller my-hostname-basic-6c3cb8b2-f623-45ac-9658-2c81c621f5c8
Jan 24 01:17:33.969: INFO: Pod name my-hostname-basic-6c3cb8b2-f623-45ac-9658-2c81c621f5c8: Found 1 pods out of 1
Jan 24 01:17:33.970: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-6c3cb8b2-f623-45ac-9658-2c81c621f5c8" are running
Jan 24 01:17:44.050: INFO: Pod "my-hostname-basic-6c3cb8b2-f623-45ac-9658-2c81c621f5c8-r7c8q" is running (conditions: [])
Jan 24 01:17:44.050: INFO: Trying to dial the pod
Jan 24 01:17:49.074: INFO: Controller my-hostname-basic-6c3cb8b2-f623-45ac-9658-2c81c621f5c8: Got expected result from replica 1 [my-hostname-basic-6c3cb8b2-f623-45ac-9658-2c81c621f5c8-r7c8q]: "my-hostname-basic-6c3cb8b2-f623-45ac-9658-2c81c621f5c8-r7c8q", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:17:49.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3211" for this suite.

• [SLOW TEST:15.411 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":248,"skipped":3937,"failed":0}
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:17:49.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-7a4d4d72-d51e-4e5e-b1d6-a5d69bf8121d
STEP: Creating a pod to test consume configMaps
Jan 24 01:17:49.180: INFO: Waiting up to 5m0s for pod "pod-configmaps-7bf698b8-368b-4d03-8256-6a5e070357f5" in namespace "configmap-1453" to be "success or failure"
Jan 24 01:17:49.201: INFO: Pod "pod-configmaps-7bf698b8-368b-4d03-8256-6a5e070357f5": Phase="Pending", Reason="", readiness=false. Elapsed: 21.25384ms
Jan 24 01:17:51.207: INFO: Pod "pod-configmaps-7bf698b8-368b-4d03-8256-6a5e070357f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027020795s
Jan 24 01:17:53.215: INFO: Pod "pod-configmaps-7bf698b8-368b-4d03-8256-6a5e070357f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035507452s
Jan 24 01:17:55.220: INFO: Pod "pod-configmaps-7bf698b8-368b-4d03-8256-6a5e070357f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040734431s
Jan 24 01:17:57.228: INFO: Pod "pod-configmaps-7bf698b8-368b-4d03-8256-6a5e070357f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047934547s
STEP: Saw pod success
Jan 24 01:17:57.228: INFO: Pod "pod-configmaps-7bf698b8-368b-4d03-8256-6a5e070357f5" satisfied condition "success or failure"
Jan 24 01:17:57.231: INFO: Trying to get logs from node jerma-node pod pod-configmaps-7bf698b8-368b-4d03-8256-6a5e070357f5 container configmap-volume-test: 
STEP: delete the pod
Jan 24 01:17:57.520: INFO: Waiting for pod pod-configmaps-7bf698b8-368b-4d03-8256-6a5e070357f5 to disappear
Jan 24 01:17:57.526: INFO: Pod pod-configmaps-7bf698b8-368b-4d03-8256-6a5e070357f5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:17:57.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1453" for this suite.

• [SLOW TEST:8.456 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":3941,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:17:57.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-3d51318a-bddd-4afc-8275-c4746d13819f
STEP: Creating a pod to test consume configMaps
Jan 24 01:17:57.728: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b03e2a37-5128-4b0a-8c81-754860de4879" in namespace "projected-6964" to be "success or failure"
Jan 24 01:17:57.759: INFO: Pod "pod-projected-configmaps-b03e2a37-5128-4b0a-8c81-754860de4879": Phase="Pending", Reason="", readiness=false. Elapsed: 30.719878ms
Jan 24 01:17:59.767: INFO: Pod "pod-projected-configmaps-b03e2a37-5128-4b0a-8c81-754860de4879": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038917527s
Jan 24 01:18:01.773: INFO: Pod "pod-projected-configmaps-b03e2a37-5128-4b0a-8c81-754860de4879": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044375949s
Jan 24 01:18:03.793: INFO: Pod "pod-projected-configmaps-b03e2a37-5128-4b0a-8c81-754860de4879": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064762459s
Jan 24 01:18:05.800: INFO: Pod "pod-projected-configmaps-b03e2a37-5128-4b0a-8c81-754860de4879": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071689462s
Jan 24 01:18:07.806: INFO: Pod "pod-projected-configmaps-b03e2a37-5128-4b0a-8c81-754860de4879": Phase="Pending", Reason="", readiness=false. Elapsed: 10.078249654s
Jan 24 01:18:09.815: INFO: Pod "pod-projected-configmaps-b03e2a37-5128-4b0a-8c81-754860de4879": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.08713932s
STEP: Saw pod success
Jan 24 01:18:09.815: INFO: Pod "pod-projected-configmaps-b03e2a37-5128-4b0a-8c81-754860de4879" satisfied condition "success or failure"
Jan 24 01:18:09.824: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-b03e2a37-5128-4b0a-8c81-754860de4879 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 24 01:18:10.038: INFO: Waiting for pod pod-projected-configmaps-b03e2a37-5128-4b0a-8c81-754860de4879 to disappear
Jan 24 01:18:10.055: INFO: Pod pod-projected-configmaps-b03e2a37-5128-4b0a-8c81-754860de4879 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:18:10.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6964" for this suite.

• [SLOW TEST:12.534 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":3943,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:18:10.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 24 01:18:10.170: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:18:11.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4536" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":251,"skipped":3951,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:18:11.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:18:22.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1442" for this suite.

• [SLOW TEST:11.246 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":252,"skipped":3957,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:18:22.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 24 01:18:22.635: INFO: Waiting up to 5m0s for pod "pod-57985a23-6fb6-4df8-b098-577f6ef246f4" in namespace "emptydir-9845" to be "success or failure"
Jan 24 01:18:22.678: INFO: Pod "pod-57985a23-6fb6-4df8-b098-577f6ef246f4": Phase="Pending", Reason="", readiness=false. Elapsed: 43.165537ms
Jan 24 01:18:24.684: INFO: Pod "pod-57985a23-6fb6-4df8-b098-577f6ef246f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049400827s
Jan 24 01:18:26.690: INFO: Pod "pod-57985a23-6fb6-4df8-b098-577f6ef246f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055376159s
Jan 24 01:18:28.697: INFO: Pod "pod-57985a23-6fb6-4df8-b098-577f6ef246f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061992027s
Jan 24 01:18:30.705: INFO: Pod "pod-57985a23-6fb6-4df8-b098-577f6ef246f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069875768s
STEP: Saw pod success
Jan 24 01:18:30.705: INFO: Pod "pod-57985a23-6fb6-4df8-b098-577f6ef246f4" satisfied condition "success or failure"
Jan 24 01:18:30.708: INFO: Trying to get logs from node jerma-node pod pod-57985a23-6fb6-4df8-b098-577f6ef246f4 container test-container: 
STEP: delete the pod
Jan 24 01:18:30.739: INFO: Waiting for pod pod-57985a23-6fb6-4df8-b098-577f6ef246f4 to disappear
Jan 24 01:18:30.749: INFO: Pod pod-57985a23-6fb6-4df8-b098-577f6ef246f4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:18:30.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9845" for this suite.

• [SLOW TEST:8.264 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":3958,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
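
The test name encodes its expectation: a pod running as root sees the default-medium emptyDir mount with mode 0777. A hand-rolled sketch of an equivalent pod (the name emptydir-mode-check and the busybox tag are illustrative, not the suite's own):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    command: ["sh", "-c", "stat -c '%a' /mnt/volume"]   # expect 777
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume
  volumes:
  - name: vol
    emptyDir: {}   # default medium: backed by node disk
EOF
kubectl logs emptydir-mode-check   # once the pod reports Succeeded
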
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:18:30.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9060.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-9060.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9060.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-9060.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9060.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9060.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-9060.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9060.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-9060.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9060.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 24 01:18:43.124: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:43.129: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:43.132: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:43.134: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:43.143: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:43.145: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:43.147: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:43.149: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:43.156: INFO: Lookups using dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9060.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9060.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local jessie_udp@dns-test-service-2.dns-9060.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9060.svc.cluster.local]

Jan 24 01:18:48.169: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:48.179: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:48.187: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:48.195: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:48.213: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:48.222: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:48.232: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:48.243: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:48.254: INFO: Lookups using dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9060.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9060.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local jessie_udp@dns-test-service-2.dns-9060.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9060.svc.cluster.local]

Jan 24 01:18:53.167: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:53.173: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:53.180: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:53.185: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:53.199: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:53.202: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:53.206: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:53.210: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:53.219: INFO: Lookups using dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9060.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9060.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local jessie_udp@dns-test-service-2.dns-9060.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9060.svc.cluster.local]

Jan 24 01:18:58.165: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:58.173: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:58.179: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:58.184: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:58.199: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:58.204: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:58.210: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:58.218: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:18:58.231: INFO: Lookups using dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9060.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9060.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local jessie_udp@dns-test-service-2.dns-9060.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9060.svc.cluster.local]

Jan 24 01:19:03.162: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:19:03.167: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:19:03.172: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:19:03.175: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:19:03.187: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:19:03.190: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:19:03.194: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:19:03.196: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:19:03.203: INFO: Lookups using dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9060.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9060.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local jessie_udp@dns-test-service-2.dns-9060.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9060.svc.cluster.local]

Jan 24 01:19:08.163: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:19:08.168: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:19:08.173: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:19:08.178: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:19:08.192: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:19:08.197: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:19:08.200: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:19:08.204: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9060.svc.cluster.local from pod dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d: the server could not find the requested resource (get pods dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d)
Jan 24 01:19:08.216: INFO: Lookups using dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9060.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9060.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9060.svc.cluster.local jessie_udp@dns-test-service-2.dns-9060.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9060.svc.cluster.local]

Jan 24 01:19:13.269: INFO: DNS probes using dns-9060/dns-test-97eca3f7-f7bb-4bc2-9acd-8591fd630c8d succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:19:13.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9060" for this suite.

• [SLOW TEST:42.729 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":254,"skipped":3981,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
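
The wheezy/jessie probe loops above query <hostname>.<service>.<namespace>.svc.cluster.local records, which only exist when a headless Service fronts pods that set hostname and subdomain. A stripped-down sketch of that wiring (sub-demo, querier and querier-1 are hypothetical names):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: sub-demo
spec:
  clusterIP: None        # headless: DNS answers with pod A records
  selector:
    app: sub-demo
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: querier
  labels:
    app: sub-demo
spec:
  hostname: querier-1
  subdomain: sub-demo    # must match the headless Service's name
  containers:
  - name: c
    image: busybox:1.29
    command: ["sleep", "3600"]
EOF
# From inside the cluster, both of these should then resolve:
#   querier-1.sub-demo.<namespace>.svc.cluster.local   (pod A record)
#   sub-demo.<namespace>.svc.cluster.local             (service record)
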
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:19:13.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1633
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 24 01:19:13.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-2101'
Jan 24 01:19:13.869: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 24 01:19:13.869: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Jan 24 01:19:13.933: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-psx5v]
Jan 24 01:19:13.933: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-psx5v" in namespace "kubectl-2101" to be "running and ready"
Jan 24 01:19:13.937: INFO: Pod "e2e-test-httpd-rc-psx5v": Phase="Pending", Reason="", readiness=false. Elapsed: 3.796626ms
Jan 24 01:19:15.942: INFO: Pod "e2e-test-httpd-rc-psx5v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008977958s
Jan 24 01:19:17.949: INFO: Pod "e2e-test-httpd-rc-psx5v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015252789s
Jan 24 01:19:19.954: INFO: Pod "e2e-test-httpd-rc-psx5v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021037292s
Jan 24 01:19:21.963: INFO: Pod "e2e-test-httpd-rc-psx5v": Phase="Pending", Reason="", readiness=false. Elapsed: 8.029214485s
Jan 24 01:19:23.973: INFO: Pod "e2e-test-httpd-rc-psx5v": Phase="Pending", Reason="", readiness=false. Elapsed: 10.039564066s
Jan 24 01:19:25.978: INFO: Pod "e2e-test-httpd-rc-psx5v": Phase="Running", Reason="", readiness=true. Elapsed: 12.044522247s
Jan 24 01:19:25.978: INFO: Pod "e2e-test-httpd-rc-psx5v" satisfied condition "running and ready"
Jan 24 01:19:25.978: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-psx5v]
Jan 24 01:19:25.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-2101'
Jan 24 01:19:26.159: INFO: stderr: ""
Jan 24 01:19:26.159: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\n[Fri Jan 24 01:19:23.512305 2020] [mpm_event:notice] [pid 1:tid 140086303271784] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Fri Jan 24 01:19:23.512390 2020] [core:notice] [pid 1:tid 140086303271784] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1638
Jan 24 01:19:26.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-2101'
Jan 24 01:19:26.304: INFO: stderr: ""
Jan 24 01:19:26.304: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:19:26.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2101" for this suite.

• [SLOW TEST:12.826 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1629
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":278,"completed":255,"skipped":4004,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
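
As the stderr above warns, the run/v1 generator was already deprecated in this release and has since been removed from kubectl. The same ReplicationController can be created from a plain manifest; a sketch with a hypothetical name (httpd-rc), reusing the image from the test:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: httpd-rc
spec:
  replicas: 1
  selector:
    app: httpd-rc
  template:
    metadata:
      labels:
        app: httpd-rc
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
EOF
kubectl logs rc/httpd-rc   # as in the test, kubectl resolves the RC to one of its pods
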
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:19:26.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name cm-test-opt-del-49f221d1-8f11-4f3d-8d61-49add882758b
STEP: Creating configMap with name cm-test-opt-upd-b7a02142-c2a2-454d-9d6b-859f2dd2d66d
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-49f221d1-8f11-4f3d-8d61-49add882758b
STEP: Updating configmap cm-test-opt-upd-b7a02142-c2a2-454d-9d6b-859f2dd2d66d
STEP: Creating configMap with name cm-test-opt-create-85d017ea-6190-4629-a799-f123d01f12c1
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:20:47.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8465" for this suite.

• [SLOW TEST:81.218 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4052,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
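
The "optional updates" wording refers to projected volume sources marked optional: true, where a missing ConfigMap does not block pod startup and later create/update/delete operations surface in the mounted files. A minimal sketch (projected-cm-demo and cm-may-not-exist are invented names):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: c
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/projected/* 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: proj
      mountPath: /etc/projected
  volumes:
  - name: proj
    projected:
      sources:
      - configMap:
          name: cm-may-not-exist
          optional: true   # absence is tolerated at pod start
EOF
# Creating, updating, or deleting cm-may-not-exist is reflected in the
# mounted files on the kubelet's next sync, which is why the test above
# spends most of its 81 seconds in "waiting to observe update in volume".
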
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:20:47.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap that has name configmap-test-emptyKey-b25d4ed7-4047-4986-a904-4c5b4d5fc177
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:20:47.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2405" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":257,"skipped":4086,"failed":0}
SSSSSSSSSSSS
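
This negative test only needs the API server's key validation. Reproducing it by hand (empty-key-demo is a hypothetical name) should fail with a validation error rather than create anything:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: empty-key-demo
data:
  "": "value"   # empty key: rejected by the API server
EOF
# Expected outcome: an "Invalid value" validation error for the empty
# data key (keys must be non-empty alphanumerics plus '-', '_' or '.').
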
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:20:47.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:20:57.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7881" for this suite.

• [SLOW TEST:10.205 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4098,"failed":0}
SS
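
The behavior under test: when a container spec leaves command and args unset, the image's own ENTRYPOINT and CMD run unchanged. A sketch (image-defaults-demo is an invented name; any image with a default entrypoint works):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-demo
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: docker.io/library/httpd:2.4.38-alpine
    # command/args intentionally omitted: the image defaults apply
EOF
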
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:20:57.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:21:35.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9556" for this suite.
STEP: Destroying namespace "nsdeletetest-7951" for this suite.
Jan 24 01:21:35.306: INFO: Namespace nsdeletetest-7951 was already deleted
STEP: Destroying namespace "nsdeletetest-7858" for this suite.

• [SLOW TEST:37.457 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":259,"skipped":4100,"failed":0}
S
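
The 37-second wall time above is mostly namespace finalization: deleting a namespace drains every pod it contains before the namespace object itself disappears, so a recreated namespace of the same name starts empty. Sketched by hand (ns-delete-demo and sleeper are hypothetical):

kubectl create namespace ns-delete-demo
kubectl -n ns-delete-demo run sleeper --image=busybox:1.29 --restart=Never -- sleep 3600
kubectl delete namespace ns-delete-demo --wait=true   # blocks until the pods are gone
kubectl create namespace ns-delete-demo
kubectl -n ns-delete-demo get pods   # "No resources found"
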
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:21:35.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:687
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:21:35.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5223" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":260,"skipped":4101,"failed":0}
SSSS
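
The "secure master service" being checked is the built-in kubernetes Service in the default namespace, which must expose the API server over HTTPS on port 443. Two read-only commands show what the test asserts:

kubectl get service kubernetes -n default    # ClusterIP service, 443/TCP
kubectl get endpoints kubernetes -n default  # backed by the API server address(es)
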
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:21:35.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-dc2bc8b0-a460-4507-b388-3fa1165325c1
STEP: Creating a pod to test consume secrets
Jan 24 01:21:35.784: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-13bc1022-aeb7-4827-bb92-c39199f80867" in namespace "projected-3266" to be "success or failure"
Jan 24 01:21:35.828: INFO: Pod "pod-projected-secrets-13bc1022-aeb7-4827-bb92-c39199f80867": Phase="Pending", Reason="", readiness=false. Elapsed: 43.681886ms
Jan 24 01:21:37.833: INFO: Pod "pod-projected-secrets-13bc1022-aeb7-4827-bb92-c39199f80867": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048224981s
Jan 24 01:21:39.839: INFO: Pod "pod-projected-secrets-13bc1022-aeb7-4827-bb92-c39199f80867": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054611024s
Jan 24 01:21:41.845: INFO: Pod "pod-projected-secrets-13bc1022-aeb7-4827-bb92-c39199f80867": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060947815s
Jan 24 01:21:43.862: INFO: Pod "pod-projected-secrets-13bc1022-aeb7-4827-bb92-c39199f80867": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077348269s
STEP: Saw pod success
Jan 24 01:21:43.862: INFO: Pod "pod-projected-secrets-13bc1022-aeb7-4827-bb92-c39199f80867" satisfied condition "success or failure"
Jan 24 01:21:43.867: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-13bc1022-aeb7-4827-bb92-c39199f80867 container projected-secret-volume-test: 
STEP: delete the pod
Jan 24 01:21:43.984: INFO: Waiting for pod pod-projected-secrets-13bc1022-aeb7-4827-bb92-c39199f80867 to disappear
Jan 24 01:21:43.993: INFO: Pod pod-projected-secrets-13bc1022-aeb7-4827-bb92-c39199f80867 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:21:43.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3266" for this suite.

• [SLOW TEST:8.487 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4105,"failed":0}
SSSSSSSSSSS
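
Same pattern as the projected ConfigMap test earlier, but with a Secret source. A compact sketch (proj-secret-demo, projected-secret-demo and the key/value pair are invented):

kubectl create secret generic proj-secret-demo --from-literal=key=value
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox:1.29
    command: ["cat", "/etc/projected/key"]
    volumeMounts:
    - name: proj
      mountPath: /etc/projected
  volumes:
  - name: proj
    projected:
      sources:
      - secret:
          name: proj-secret-demo
EOF
kubectl logs projected-secret-demo   # prints "value" once the pod succeeds
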
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:21:44.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod busybox-dac8d576-7986-4972-aa65-9502d7e33cfe in namespace container-probe-3054
Jan 24 01:21:52.430: INFO: Started pod busybox-dac8d576-7986-4972-aa65-9502d7e33cfe in namespace container-probe-3054
STEP: checking the pod's current state and verifying that restartCount is present
Jan 24 01:21:52.433: INFO: Initial restart count of pod busybox-dac8d576-7986-4972-aa65-9502d7e33cfe is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:25:53.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3054" for this suite.

• [SLOW TEST:249.617 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4116,"failed":0}
SSSS
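
The ~250-second runtime is the point of this test: it creates the file the probe reads, then simply watches restartCount stay at 0 for four minutes. An equivalent pod, sketched with a hypothetical name (liveness-ok-demo):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-ok-demo
spec:
  containers:
  - name: c
    image: busybox:1.29
    command: ["sh", "-c", "touch /tmp/health && sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# /tmp/health persists, so the probe keeps passing:
kubectl get pod liveness-ok-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'   # stays 0
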
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:25:53.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-projected-hhbd
STEP: Creating a pod to test atomic-volume-subpath
Jan 24 01:25:53.746: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-hhbd" in namespace "subpath-5944" to be "success or failure"
Jan 24 01:25:53.762: INFO: Pod "pod-subpath-test-projected-hhbd": Phase="Pending", Reason="", readiness=false. Elapsed: 15.357303ms
Jan 24 01:25:55.785: INFO: Pod "pod-subpath-test-projected-hhbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038510462s
Jan 24 01:25:57.797: INFO: Pod "pod-subpath-test-projected-hhbd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050410443s
Jan 24 01:25:59.871: INFO: Pod "pod-subpath-test-projected-hhbd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124219335s
Jan 24 01:26:01.881: INFO: Pod "pod-subpath-test-projected-hhbd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.134039484s
Jan 24 01:26:03.894: INFO: Pod "pod-subpath-test-projected-hhbd": Phase="Running", Reason="", readiness=true. Elapsed: 10.147139731s
Jan 24 01:26:05.900: INFO: Pod "pod-subpath-test-projected-hhbd": Phase="Running", Reason="", readiness=true. Elapsed: 12.153188018s
Jan 24 01:26:07.907: INFO: Pod "pod-subpath-test-projected-hhbd": Phase="Running", Reason="", readiness=true. Elapsed: 14.16065753s
Jan 24 01:26:09.919: INFO: Pod "pod-subpath-test-projected-hhbd": Phase="Running", Reason="", readiness=true. Elapsed: 16.172312529s
Jan 24 01:26:11.925: INFO: Pod "pod-subpath-test-projected-hhbd": Phase="Running", Reason="", readiness=true. Elapsed: 18.178220992s
Jan 24 01:26:13.931: INFO: Pod "pod-subpath-test-projected-hhbd": Phase="Running", Reason="", readiness=true. Elapsed: 20.184807994s
Jan 24 01:26:15.936: INFO: Pod "pod-subpath-test-projected-hhbd": Phase="Running", Reason="", readiness=true. Elapsed: 22.18980863s
Jan 24 01:26:17.944: INFO: Pod "pod-subpath-test-projected-hhbd": Phase="Running", Reason="", readiness=true. Elapsed: 24.197808491s
Jan 24 01:26:19.950: INFO: Pod "pod-subpath-test-projected-hhbd": Phase="Running", Reason="", readiness=true. Elapsed: 26.203598746s
Jan 24 01:26:21.954: INFO: Pod "pod-subpath-test-projected-hhbd": Phase="Running", Reason="", readiness=true. Elapsed: 28.207875903s
Jan 24 01:26:23.962: INFO: Pod "pod-subpath-test-projected-hhbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.215352152s
STEP: Saw pod success
Jan 24 01:26:23.962: INFO: Pod "pod-subpath-test-projected-hhbd" satisfied condition "success or failure"
Jan 24 01:26:23.965: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-hhbd container test-container-subpath-projected-hhbd: 
STEP: delete the pod
Jan 24 01:26:24.150: INFO: Waiting for pod pod-subpath-test-projected-hhbd to disappear
Jan 24 01:26:24.153: INFO: Pod pod-subpath-test-projected-hhbd no longer exists
STEP: Deleting pod pod-subpath-test-projected-hhbd
Jan 24 01:26:24.153: INFO: Deleting pod "pod-subpath-test-projected-hhbd" in namespace "subpath-5944"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:26:24.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5944" for this suite.

• [SLOW TEST:30.549 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":263,"skipped":4120,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
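
"Atomic writer" volumes (ConfigMap, Secret, downwardAPI, projected) are updated by the kubelet through an atomic symlink swap, and this test checks that a subPath mount into such a volume still serves the expected file. A simplified sketch of the mount shape (subpath-demo-cm and friends are invented; note that subPath mounts, unlike whole-volume mounts, do not pick up later updates):

kubectl create configmap subpath-demo-cm --from-literal=file.txt=hello
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox:1.29
    command: ["cat", "/mnt/file.txt"]
    volumeMounts:
    - name: proj
      mountPath: /mnt/file.txt
      subPath: file.txt   # mount a single file out of the volume
  volumes:
  - name: proj
    projected:
      sources:
      - configMap:
          name: subpath-demo-cm
EOF
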
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:26:24.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jan 24 01:26:24.343: INFO: Created pod &Pod{ObjectMeta:{dns-3675  dns-3675 /api/v1/namespaces/dns-3675/pods/dns-3675 b01e4f31-0c4e-4522-872b-2b4de3b4b65d 3930205 0 2020-01-24 01:26:24 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlbgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlbgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlbgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Jan 24 01:26:32.370: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-3675 PodName:dns-3675 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 24 01:26:32.370: INFO: >>> kubeConfig: /root/.kube/config
I0124 01:26:32.426950       8 log.go:172] (0xc002d24790) (0xc000d099a0) Create stream
I0124 01:26:32.427080       8 log.go:172] (0xc002d24790) (0xc000d099a0) Stream added, broadcasting: 1
I0124 01:26:32.434734       8 log.go:172] (0xc002d24790) Reply frame received for 1
I0124 01:26:32.434872       8 log.go:172] (0xc002d24790) (0xc000d2ba40) Create stream
I0124 01:26:32.434888       8 log.go:172] (0xc002d24790) (0xc000d2ba40) Stream added, broadcasting: 3
I0124 01:26:32.437695       8 log.go:172] (0xc002d24790) Reply frame received for 3
I0124 01:26:32.437751       8 log.go:172] (0xc002d24790) (0xc0011da280) Create stream
I0124 01:26:32.437763       8 log.go:172] (0xc002d24790) (0xc0011da280) Stream added, broadcasting: 5
I0124 01:26:32.440280       8 log.go:172] (0xc002d24790) Reply frame received for 5
I0124 01:26:32.575302       8 log.go:172] (0xc002d24790) Data frame received for 3
I0124 01:26:32.575422       8 log.go:172] (0xc000d2ba40) (3) Data frame handling
I0124 01:26:32.575441       8 log.go:172] (0xc000d2ba40) (3) Data frame sent
I0124 01:26:32.693243       8 log.go:172] (0xc002d24790) (0xc000d2ba40) Stream removed, broadcasting: 3
I0124 01:26:32.693470       8 log.go:172] (0xc002d24790) (0xc0011da280) Stream removed, broadcasting: 5
I0124 01:26:32.693527       8 log.go:172] (0xc002d24790) Data frame received for 1
I0124 01:26:32.693556       8 log.go:172] (0xc000d099a0) (1) Data frame handling
I0124 01:26:32.693567       8 log.go:172] (0xc000d099a0) (1) Data frame sent
I0124 01:26:32.693580       8 log.go:172] (0xc002d24790) (0xc000d099a0) Stream removed, broadcasting: 1
I0124 01:26:32.693613       8 log.go:172] (0xc002d24790) Go away received
I0124 01:26:32.693811       8 log.go:172] (0xc002d24790) (0xc000d099a0) Stream removed, broadcasting: 1
I0124 01:26:32.693855       8 log.go:172] (0xc002d24790) (0xc000d2ba40) Stream removed, broadcasting: 3
I0124 01:26:32.693882       8 log.go:172] (0xc002d24790) (0xc0011da280) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Jan 24 01:26:32.693: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-3675 PodName:dns-3675 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 24 01:26:32.694: INFO: >>> kubeConfig: /root/.kube/config
I0124 01:26:32.737279       8 log.go:172] (0xc00293a4d0) (0xc001aee8c0) Create stream
I0124 01:26:32.737328       8 log.go:172] (0xc00293a4d0) (0xc001aee8c0) Stream added, broadcasting: 1
I0124 01:26:32.746623       8 log.go:172] (0xc00293a4d0) Reply frame received for 1
I0124 01:26:32.746690       8 log.go:172] (0xc00293a4d0) (0xc000d2bae0) Create stream
I0124 01:26:32.746705       8 log.go:172] (0xc00293a4d0) (0xc000d2bae0) Stream added, broadcasting: 3
I0124 01:26:32.748676       8 log.go:172] (0xc00293a4d0) Reply frame received for 3
I0124 01:26:32.748724       8 log.go:172] (0xc00293a4d0) (0xc000d2bc20) Create stream
I0124 01:26:32.748740       8 log.go:172] (0xc00293a4d0) (0xc000d2bc20) Stream added, broadcasting: 5
I0124 01:26:32.754695       8 log.go:172] (0xc00293a4d0) Reply frame received for 5
I0124 01:26:32.858606       8 log.go:172] (0xc00293a4d0) Data frame received for 3
I0124 01:26:32.858713       8 log.go:172] (0xc000d2bae0) (3) Data frame handling
I0124 01:26:32.858729       8 log.go:172] (0xc000d2bae0) (3) Data frame sent
I0124 01:26:32.954049       8 log.go:172] (0xc00293a4d0) (0xc000d2bae0) Stream removed, broadcasting: 3
I0124 01:26:32.954278       8 log.go:172] (0xc00293a4d0) Data frame received for 1
I0124 01:26:32.954328       8 log.go:172] (0xc001aee8c0) (1) Data frame handling
I0124 01:26:32.954357       8 log.go:172] (0xc001aee8c0) (1) Data frame sent
I0124 01:26:32.954375       8 log.go:172] (0xc00293a4d0) (0xc001aee8c0) Stream removed, broadcasting: 1
I0124 01:26:32.954543       8 log.go:172] (0xc00293a4d0) (0xc000d2bc20) Stream removed, broadcasting: 5
I0124 01:26:32.954703       8 log.go:172] (0xc00293a4d0) Go away received
I0124 01:26:32.954821       8 log.go:172] (0xc00293a4d0) (0xc001aee8c0) Stream removed, broadcasting: 1
I0124 01:26:32.954848       8 log.go:172] (0xc00293a4d0) (0xc000d2bae0) Stream removed, broadcasting: 3
I0124 01:26:32.954861       8 log.go:172] (0xc00293a4d0) (0xc000d2bc20) Stream removed, broadcasting: 5
Jan 24 01:26:32.954: INFO: Deleting pod dns-3675...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:26:32.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3675" for this suite.

• [SLOW TEST:8.847 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":264,"skipped":4147,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
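
The pod spec dumped above shows exactly what this test exercises: DNSPolicy:None plus a DNSConfig of Nameservers:[1.1.1.1] and Searches:[resolv.conf.local]. The same shape as a manifest (dns-config-demo is a hypothetical name):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dns-config-demo
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox:1.29
    command: ["cat", "/etc/resolv.conf"]
  dnsPolicy: "None"   # ignore cluster DNS entirely
  dnsConfig:
    nameservers: ["1.1.1.1"]
    searches: ["resolv.conf.local"]
EOF
kubectl logs dns-config-demo   # resolv.conf lists only the values above
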
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:26:33.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:26:43.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2030" for this suite.

• [SLOW TEST:10.194 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4291,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:26:43.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-ec3a3870-e5b4-4117-92a6-51305fe060fe
STEP: Creating a pod to test consume configMaps
Jan 24 01:26:43.345: INFO: Waiting up to 5m0s for pod "pod-configmaps-6a27aad3-0376-4e25-b777-501b33404b9e" in namespace "configmap-6261" to be "success or failure"
Jan 24 01:26:43.366: INFO: Pod "pod-configmaps-6a27aad3-0376-4e25-b777-501b33404b9e": Phase="Pending", Reason="", readiness=false. Elapsed: 20.103604ms
Jan 24 01:26:45.372: INFO: Pod "pod-configmaps-6a27aad3-0376-4e25-b777-501b33404b9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025965933s
Jan 24 01:26:47.377: INFO: Pod "pod-configmaps-6a27aad3-0376-4e25-b777-501b33404b9e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031443609s
Jan 24 01:26:49.420: INFO: Pod "pod-configmaps-6a27aad3-0376-4e25-b777-501b33404b9e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074484994s
Jan 24 01:26:51.431: INFO: Pod "pod-configmaps-6a27aad3-0376-4e25-b777-501b33404b9e": Phase="Running", Reason="", readiness=true. Elapsed: 8.085363009s
Jan 24 01:26:53.438: INFO: Pod "pod-configmaps-6a27aad3-0376-4e25-b777-501b33404b9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.092433792s
STEP: Saw pod success
Jan 24 01:26:53.438: INFO: Pod "pod-configmaps-6a27aad3-0376-4e25-b777-501b33404b9e" satisfied condition "success or failure"
Jan 24 01:26:53.443: INFO: Trying to get logs from node jerma-node pod pod-configmaps-6a27aad3-0376-4e25-b777-501b33404b9e container configmap-volume-test: 
STEP: delete the pod
Jan 24 01:26:53.543: INFO: Waiting for pod pod-configmaps-6a27aad3-0376-4e25-b777-501b33404b9e to disappear
Jan 24 01:26:53.555: INFO: Pod pod-configmaps-6a27aad3-0376-4e25-b777-501b33404b9e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:26:53.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6261" for this suite.

• [SLOW TEST:10.352 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4333,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:26:53.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 24 01:26:53.680: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:26:54.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6314" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":267,"skipped":4361,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:26:54.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-8448
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-8448
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-8448
Jan 24 01:26:54.525: INFO: Found 0 stateful pods, waiting for 1
Jan 24 01:27:04.533: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 24 01:27:04.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8448 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 24 01:27:07.276: INFO: stderr: "I0124 01:27:07.036279    4291 log.go:172] (0xc0000f4580) (0xc00047a8c0) Create stream\nI0124 01:27:07.036607    4291 log.go:172] (0xc0000f4580) (0xc00047a8c0) Stream added, broadcasting: 1\nI0124 01:27:07.040924    4291 log.go:172] (0xc0000f4580) Reply frame received for 1\nI0124 01:27:07.040959    4291 log.go:172] (0xc0000f4580) (0xc00057ee60) Create stream\nI0124 01:27:07.040968    4291 log.go:172] (0xc0000f4580) (0xc00057ee60) Stream added, broadcasting: 3\nI0124 01:27:07.042624    4291 log.go:172] (0xc0000f4580) Reply frame received for 3\nI0124 01:27:07.042674    4291 log.go:172] (0xc0000f4580) (0xc0005afe00) Create stream\nI0124 01:27:07.042685    4291 log.go:172] (0xc0000f4580) (0xc0005afe00) Stream added, broadcasting: 5\nI0124 01:27:07.044279    4291 log.go:172] (0xc0000f4580) Reply frame received for 5\nI0124 01:27:07.135735    4291 log.go:172] (0xc0000f4580) Data frame received for 5\nI0124 01:27:07.135781    4291 log.go:172] (0xc0005afe00) (5) Data frame handling\nI0124 01:27:07.135796    4291 log.go:172] (0xc0005afe00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0124 01:27:07.171358    4291 log.go:172] (0xc0000f4580) Data frame received for 3\nI0124 01:27:07.171391    4291 log.go:172] (0xc00057ee60) (3) Data frame handling\nI0124 01:27:07.171427    4291 log.go:172] (0xc00057ee60) (3) Data frame sent\nI0124 01:27:07.268420    4291 log.go:172] (0xc0000f4580) Data frame received for 1\nI0124 01:27:07.268500    4291 log.go:172] (0xc0000f4580) (0xc0005afe00) Stream removed, broadcasting: 5\nI0124 01:27:07.268540    4291 log.go:172] (0xc00047a8c0) (1) Data frame handling\nI0124 01:27:07.268572    4291 log.go:172] (0xc00047a8c0) (1) Data frame sent\nI0124 01:27:07.268592    4291 log.go:172] (0xc0000f4580) (0xc00057ee60) Stream removed, broadcasting: 3\nI0124 01:27:07.268640    4291 log.go:172] (0xc0000f4580) (0xc00047a8c0) Stream removed, broadcasting: 1\nI0124 01:27:07.268672    4291 log.go:172] (0xc0000f4580) Go away received\nI0124 01:27:07.269446    4291 log.go:172] (0xc0000f4580) (0xc00047a8c0) Stream removed, broadcasting: 1\nI0124 01:27:07.269458    4291 log.go:172] (0xc0000f4580) (0xc00057ee60) Stream removed, broadcasting: 3\nI0124 01:27:07.269462    4291 log.go:172] (0xc0000f4580) (0xc0005afe00) Stream removed, broadcasting: 5\n"
Jan 24 01:27:07.276: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 24 01:27:07.276: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 24 01:27:07.281: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 24 01:27:17.287: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
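The mv dance above is how the suite makes a pod unhealthy without killing it: ss serves /usr/local/apache2/htdocs/index.html from an httpd-based image, and its readiness probe fetches that file, so moving it to /tmp flips the pod to Running - Ready=false (and moving it back later restores readiness). A container fragment along these lines reproduces the behavior (a sketch under that assumption; the suite's exact image tag and probe timings are not shown in this log):

containers:
- name: webserver
  image: httpd:2.4
  readinessProbe:
    httpGet:
      path: /index.html   # fails with 404 once the file is moved away
      port: 80
    periodSeconds: 1
    failureThreshold: 1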
Jan 24 01:27:17.287: INFO: Waiting for statefulset status.replicas updated to 0
Jan 24 01:27:17.332: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999579s
Jan 24 01:27:18.340: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.971020125s
Jan 24 01:27:19.345: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.963253529s
Jan 24 01:27:20.351: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.957349474s
Jan 24 01:27:21.356: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.95135565s
Jan 24 01:27:22.363: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.946356642s
Jan 24 01:27:23.372: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.939296152s
Jan 24 01:27:24.378: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.930555839s
Jan 24 01:27:25.385: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.924535631s
Jan 24 01:27:26.390: INFO: Verifying statefulset ss doesn't scale past 1 for another 917.536898ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8448
Jan 24 01:27:27.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 24 01:27:27.752: INFO: stderr: "I0124 01:27:27.545335    4314 log.go:172] (0xc0008d4580) (0xc000299540) Create stream\nI0124 01:27:27.545549    4314 log.go:172] (0xc0008d4580) (0xc000299540) Stream added, broadcasting: 1\nI0124 01:27:27.549442    4314 log.go:172] (0xc0008d4580) Reply frame received for 1\nI0124 01:27:27.549504    4314 log.go:172] (0xc0008d4580) (0xc0005ddc20) Create stream\nI0124 01:27:27.549530    4314 log.go:172] (0xc0008d4580) (0xc0005ddc20) Stream added, broadcasting: 3\nI0124 01:27:27.552100    4314 log.go:172] (0xc0008d4580) Reply frame received for 3\nI0124 01:27:27.552138    4314 log.go:172] (0xc0008d4580) (0xc0009fa000) Create stream\nI0124 01:27:27.552152    4314 log.go:172] (0xc0008d4580) (0xc0009fa000) Stream added, broadcasting: 5\nI0124 01:27:27.555957    4314 log.go:172] (0xc0008d4580) Reply frame received for 5\nI0124 01:27:27.662640    4314 log.go:172] (0xc0008d4580) Data frame received for 5\nI0124 01:27:27.662700    4314 log.go:172] (0xc0009fa000) (5) Data frame handling\nI0124 01:27:27.662724    4314 log.go:172] (0xc0009fa000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0124 01:27:27.663211    4314 log.go:172] (0xc0008d4580) Data frame received for 3\nI0124 01:27:27.663228    4314 log.go:172] (0xc0005ddc20) (3) Data frame handling\nI0124 01:27:27.663247    4314 log.go:172] (0xc0005ddc20) (3) Data frame sent\nI0124 01:27:27.744610    4314 log.go:172] (0xc0008d4580) Data frame received for 1\nI0124 01:27:27.744660    4314 log.go:172] (0xc0008d4580) (0xc0009fa000) Stream removed, broadcasting: 5\nI0124 01:27:27.744687    4314 log.go:172] (0xc000299540) (1) Data frame handling\nI0124 01:27:27.744694    4314 log.go:172] (0xc000299540) (1) Data frame sent\nI0124 01:27:27.744710    4314 log.go:172] (0xc0008d4580) (0xc0005ddc20) Stream removed, broadcasting: 3\nI0124 01:27:27.744730    4314 log.go:172] (0xc0008d4580) (0xc000299540) Stream removed, broadcasting: 1\nI0124 01:27:27.744739    4314 log.go:172] (0xc0008d4580) Go away received\nI0124 01:27:27.745543    4314 log.go:172] (0xc0008d4580) (0xc000299540) Stream removed, broadcasting: 1\nI0124 01:27:27.745553    4314 log.go:172] (0xc0008d4580) (0xc0005ddc20) Stream removed, broadcasting: 3\nI0124 01:27:27.745557    4314 log.go:172] (0xc0008d4580) (0xc0009fa000) Stream removed, broadcasting: 5\n"
Jan 24 01:27:27.752: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 24 01:27:27.752: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 24 01:27:27.775: INFO: Found 2 stateful pods, waiting for 3
Jan 24 01:27:37.783: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 01:27:37.783: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 01:27:37.783: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 24 01:27:47.783: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 01:27:47.783: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 01:27:47.783: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 24 01:27:47.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8448 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 24 01:27:48.173: INFO: stderr: "I0124 01:27:48.024863    4337 log.go:172] (0xc000b160b0) (0xc00063a140) Create stream\nI0124 01:27:48.024964    4337 log.go:172] (0xc000b160b0) (0xc00063a140) Stream added, broadcasting: 1\nI0124 01:27:48.027221    4337 log.go:172] (0xc000b160b0) Reply frame received for 1\nI0124 01:27:48.027257    4337 log.go:172] (0xc000b160b0) (0xc00063a1e0) Create stream\nI0124 01:27:48.027270    4337 log.go:172] (0xc000b160b0) (0xc00063a1e0) Stream added, broadcasting: 3\nI0124 01:27:48.028683    4337 log.go:172] (0xc000b160b0) Reply frame received for 3\nI0124 01:27:48.028744    4337 log.go:172] (0xc000b160b0) (0xc00063a280) Create stream\nI0124 01:27:48.028768    4337 log.go:172] (0xc000b160b0) (0xc00063a280) Stream added, broadcasting: 5\nI0124 01:27:48.030506    4337 log.go:172] (0xc000b160b0) Reply frame received for 5\nI0124 01:27:48.103184    4337 log.go:172] (0xc000b160b0) Data frame received for 5\nI0124 01:27:48.103203    4337 log.go:172] (0xc00063a280) (5) Data frame handling\nI0124 01:27:48.103212    4337 log.go:172] (0xc00063a280) (5) Data frame sent\nI0124 01:27:48.103216    4337 log.go:172] (0xc000b160b0) Data frame received for 5\nI0124 01:27:48.103220    4337 log.go:172] (0xc00063a280) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0124 01:27:48.103240    4337 log.go:172] (0xc00063a280) (5) Data frame sent\nI0124 01:27:48.104879    4337 log.go:172] (0xc000b160b0) Data frame received for 3\nI0124 01:27:48.104914    4337 log.go:172] (0xc00063a1e0) (3) Data frame handling\nI0124 01:27:48.104925    4337 log.go:172] (0xc00063a1e0) (3) Data frame sent\nI0124 01:27:48.166821    4337 log.go:172] (0xc000b160b0) (0xc00063a1e0) Stream removed, broadcasting: 3\nI0124 01:27:48.166999    4337 log.go:172] (0xc000b160b0) Data frame received for 1\nI0124 01:27:48.167018    4337 log.go:172] (0xc00063a140) (1) Data frame handling\nI0124 01:27:48.167027    4337 log.go:172] (0xc00063a140) (1) Data frame sent\nI0124 01:27:48.167034    4337 log.go:172] (0xc000b160b0) (0xc00063a140) Stream removed, broadcasting: 1\nI0124 01:27:48.167268    4337 log.go:172] (0xc000b160b0) (0xc00063a280) Stream removed, broadcasting: 5\nI0124 01:27:48.167291    4337 log.go:172] (0xc000b160b0) (0xc00063a140) Stream removed, broadcasting: 1\nI0124 01:27:48.167299    4337 log.go:172] (0xc000b160b0) (0xc00063a1e0) Stream removed, broadcasting: 3\nI0124 01:27:48.167304    4337 log.go:172] (0xc000b160b0) (0xc00063a280) Stream removed, broadcasting: 5\n"
Jan 24 01:27:48.173: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 24 01:27:48.173: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 24 01:27:48.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8448 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 24 01:27:48.559: INFO: stderr: "I0124 01:27:48.317947    4356 log.go:172] (0xc000a28160) (0xc000c08320) Create stream\nI0124 01:27:48.318079    4356 log.go:172] (0xc000a28160) (0xc000c08320) Stream added, broadcasting: 1\nI0124 01:27:48.321189    4356 log.go:172] (0xc000a28160) Reply frame received for 1\nI0124 01:27:48.321214    4356 log.go:172] (0xc000a28160) (0xc000a3a0a0) Create stream\nI0124 01:27:48.321220    4356 log.go:172] (0xc000a28160) (0xc000a3a0a0) Stream added, broadcasting: 3\nI0124 01:27:48.322303    4356 log.go:172] (0xc000a28160) Reply frame received for 3\nI0124 01:27:48.322318    4356 log.go:172] (0xc000a28160) (0xc000c083c0) Create stream\nI0124 01:27:48.322323    4356 log.go:172] (0xc000a28160) (0xc000c083c0) Stream added, broadcasting: 5\nI0124 01:27:48.323384    4356 log.go:172] (0xc000a28160) Reply frame received for 5\nI0124 01:27:48.392188    4356 log.go:172] (0xc000a28160) Data frame received for 5\nI0124 01:27:48.392235    4356 log.go:172] (0xc000c083c0) (5) Data frame handling\nI0124 01:27:48.392254    4356 log.go:172] (0xc000c083c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0124 01:27:48.415743    4356 log.go:172] (0xc000a28160) Data frame received for 3\nI0124 01:27:48.415771    4356 log.go:172] (0xc000a3a0a0) (3) Data frame handling\nI0124 01:27:48.415786    4356 log.go:172] (0xc000a3a0a0) (3) Data frame sent\nI0124 01:27:48.541162    4356 log.go:172] (0xc000a28160) Data frame received for 1\nI0124 01:27:48.541400    4356 log.go:172] (0xc000c08320) (1) Data frame handling\nI0124 01:27:48.541443    4356 log.go:172] (0xc000c08320) (1) Data frame sent\nI0124 01:27:48.541485    4356 log.go:172] (0xc000a28160) (0xc000c08320) Stream removed, broadcasting: 1\nI0124 01:27:48.542200    4356 log.go:172] (0xc000a28160) (0xc000a3a0a0) Stream removed, broadcasting: 3\nI0124 01:27:48.542691    4356 log.go:172] (0xc000a28160) (0xc000c083c0) Stream removed, broadcasting: 5\nI0124 01:27:48.542765    4356 log.go:172] (0xc000a28160) (0xc000c08320) Stream removed, broadcasting: 1\nI0124 01:27:48.542777    4356 log.go:172] (0xc000a28160) (0xc000a3a0a0) Stream removed, broadcasting: 3\nI0124 01:27:48.542786    4356 log.go:172] (0xc000a28160) (0xc000c083c0) Stream removed, broadcasting: 5\n"
Jan 24 01:27:48.559: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 24 01:27:48.559: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 24 01:27:48.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8448 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 24 01:27:49.273: INFO: stderr: "I0124 01:27:49.043025    4376 log.go:172] (0xc0006de370) (0xc00079e000) Create stream\nI0124 01:27:49.043175    4376 log.go:172] (0xc0006de370) (0xc00079e000) Stream added, broadcasting: 1\nI0124 01:27:49.047241    4376 log.go:172] (0xc0006de370) Reply frame received for 1\nI0124 01:27:49.047272    4376 log.go:172] (0xc0006de370) (0xc0008f3360) Create stream\nI0124 01:27:49.047282    4376 log.go:172] (0xc0006de370) (0xc0008f3360) Stream added, broadcasting: 3\nI0124 01:27:49.049226    4376 log.go:172] (0xc0006de370) Reply frame received for 3\nI0124 01:27:49.049267    4376 log.go:172] (0xc0006de370) (0xc00079fd60) Create stream\nI0124 01:27:49.049274    4376 log.go:172] (0xc0006de370) (0xc00079fd60) Stream added, broadcasting: 5\nI0124 01:27:49.050532    4376 log.go:172] (0xc0006de370) Reply frame received for 5\nI0124 01:27:49.150002    4376 log.go:172] (0xc0006de370) Data frame received for 5\nI0124 01:27:49.150062    4376 log.go:172] (0xc00079fd60) (5) Data frame handling\nI0124 01:27:49.150083    4376 log.go:172] (0xc00079fd60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0124 01:27:49.186327    4376 log.go:172] (0xc0006de370) Data frame received for 3\nI0124 01:27:49.186399    4376 log.go:172] (0xc0008f3360) (3) Data frame handling\nI0124 01:27:49.186422    4376 log.go:172] (0xc0008f3360) (3) Data frame sent\nI0124 01:27:49.265559    4376 log.go:172] (0xc0006de370) (0xc0008f3360) Stream removed, broadcasting: 3\nI0124 01:27:49.265638    4376 log.go:172] (0xc0006de370) Data frame received for 1\nI0124 01:27:49.265660    4376 log.go:172] (0xc00079e000) (1) Data frame handling\nI0124 01:27:49.265683    4376 log.go:172] (0xc00079e000) (1) Data frame sent\nI0124 01:27:49.265699    4376 log.go:172] (0xc0006de370) (0xc00079fd60) Stream removed, broadcasting: 5\nI0124 01:27:49.265723    4376 log.go:172] (0xc0006de370) (0xc00079e000) Stream removed, broadcasting: 1\nI0124 01:27:49.265733    4376 log.go:172] (0xc0006de370) Go away received\nI0124 01:27:49.266218    4376 log.go:172] (0xc0006de370) (0xc00079e000) Stream removed, broadcasting: 1\nI0124 01:27:49.266232    4376 log.go:172] (0xc0006de370) (0xc0008f3360) Stream removed, broadcasting: 3\nI0124 01:27:49.266238    4376 log.go:172] (0xc0006de370) (0xc00079fd60) Stream removed, broadcasting: 5\n"
Jan 24 01:27:49.273: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 24 01:27:49.273: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 24 01:27:49.273: INFO: Waiting for statefulset status.replicas updated to 0
Jan 24 01:27:49.278: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 24 01:27:59.293: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 24 01:27:59.293: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 24 01:27:59.293: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 24 01:27:59.307: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999526s
Jan 24 01:28:00.318: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994939425s
Jan 24 01:28:01.327: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.983875358s
Jan 24 01:28:02.335: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.974806662s
Jan 24 01:28:03.798: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.967317764s
Jan 24 01:28:04.816: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.504184208s
Jan 24 01:28:05.861: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.486049083s
Jan 24 01:28:06.868: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.441248256s
Jan 24 01:28:07.876: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.434569131s
Jan 24 01:28:08.883: INFO: Verifying statefulset ss doesn't scale past 3 for another 426.335524ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-8448
Jan 24 01:28:09.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8448 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 24 01:28:10.302: INFO: stderr: "I0124 01:28:10.124754    4394 log.go:172] (0xc000578840) (0xc000707f40) Create stream\nI0124 01:28:10.124858    4394 log.go:172] (0xc000578840) (0xc000707f40) Stream added, broadcasting: 1\nI0124 01:28:10.127883    4394 log.go:172] (0xc000578840) Reply frame received for 1\nI0124 01:28:10.127917    4394 log.go:172] (0xc000578840) (0xc000950000) Create stream\nI0124 01:28:10.127925    4394 log.go:172] (0xc000578840) (0xc000950000) Stream added, broadcasting: 3\nI0124 01:28:10.129341    4394 log.go:172] (0xc000578840) Reply frame received for 3\nI0124 01:28:10.129382    4394 log.go:172] (0xc000578840) (0xc0006ca780) Create stream\nI0124 01:28:10.129395    4394 log.go:172] (0xc000578840) (0xc0006ca780) Stream added, broadcasting: 5\nI0124 01:28:10.130863    4394 log.go:172] (0xc000578840) Reply frame received for 5\nI0124 01:28:10.209943    4394 log.go:172] (0xc000578840) Data frame received for 5\nI0124 01:28:10.209979    4394 log.go:172] (0xc0006ca780) (5) Data frame handling\nI0124 01:28:10.209996    4394 log.go:172] (0xc0006ca780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0124 01:28:10.212526    4394 log.go:172] (0xc000578840) Data frame received for 3\nI0124 01:28:10.212556    4394 log.go:172] (0xc000950000) (3) Data frame handling\nI0124 01:28:10.212575    4394 log.go:172] (0xc000950000) (3) Data frame sent\nI0124 01:28:10.290701    4394 log.go:172] (0xc000578840) (0xc0006ca780) Stream removed, broadcasting: 5\nI0124 01:28:10.290853    4394 log.go:172] (0xc000578840) Data frame received for 1\nI0124 01:28:10.290878    4394 log.go:172] (0xc000578840) (0xc000950000) Stream removed, broadcasting: 3\nI0124 01:28:10.290990    4394 log.go:172] (0xc000707f40) (1) Data frame handling\nI0124 01:28:10.291071    4394 log.go:172] (0xc000707f40) (1) Data frame sent\nI0124 01:28:10.291132    4394 log.go:172] (0xc000578840) (0xc000707f40) Stream removed, broadcasting: 1\nI0124 01:28:10.291207    4394 log.go:172] (0xc000578840) Go away received\nI0124 01:28:10.292175    4394 log.go:172] (0xc000578840) (0xc000707f40) Stream removed, broadcasting: 1\nI0124 01:28:10.292198    4394 log.go:172] (0xc000578840) (0xc000950000) Stream removed, broadcasting: 3\nI0124 01:28:10.292209    4394 log.go:172] (0xc000578840) (0xc0006ca780) Stream removed, broadcasting: 5\n"
Jan 24 01:28:10.302: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 24 01:28:10.302: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 24 01:28:10.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8448 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 24 01:28:10.718: INFO: stderr: "I0124 01:28:10.463040    4416 log.go:172] (0xc0001116b0) (0xc0006c9d60) Create stream\nI0124 01:28:10.463227    4416 log.go:172] (0xc0001116b0) (0xc0006c9d60) Stream added, broadcasting: 1\nI0124 01:28:10.468762    4416 log.go:172] (0xc0001116b0) Reply frame received for 1\nI0124 01:28:10.468860    4416 log.go:172] (0xc0001116b0) (0xc0008f8000) Create stream\nI0124 01:28:10.468871    4416 log.go:172] (0xc0001116b0) (0xc0008f8000) Stream added, broadcasting: 3\nI0124 01:28:10.470199    4416 log.go:172] (0xc0001116b0) Reply frame received for 3\nI0124 01:28:10.470232    4416 log.go:172] (0xc0001116b0) (0xc000407360) Create stream\nI0124 01:28:10.470247    4416 log.go:172] (0xc0001116b0) (0xc000407360) Stream added, broadcasting: 5\nI0124 01:28:10.471185    4416 log.go:172] (0xc0001116b0) Reply frame received for 5\nI0124 01:28:10.577374    4416 log.go:172] (0xc0001116b0) Data frame received for 5\nI0124 01:28:10.577519    4416 log.go:172] (0xc000407360) (5) Data frame handling\nI0124 01:28:10.577555    4416 log.go:172] (0xc000407360) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0124 01:28:10.577607    4416 log.go:172] (0xc0001116b0) Data frame received for 3\nI0124 01:28:10.577643    4416 log.go:172] (0xc0008f8000) (3) Data frame handling\nI0124 01:28:10.577654    4416 log.go:172] (0xc0008f8000) (3) Data frame sent\nI0124 01:28:10.709480    4416 log.go:172] (0xc0001116b0) Data frame received for 1\nI0124 01:28:10.709893    4416 log.go:172] (0xc0001116b0) (0xc0008f8000) Stream removed, broadcasting: 3\nI0124 01:28:10.709997    4416 log.go:172] (0xc0006c9d60) (1) Data frame handling\nI0124 01:28:10.710016    4416 log.go:172] (0xc0006c9d60) (1) Data frame sent\nI0124 01:28:10.710044    4416 log.go:172] (0xc0001116b0) (0xc000407360) Stream removed, broadcasting: 5\nI0124 01:28:10.710069    4416 log.go:172] (0xc0001116b0) (0xc0006c9d60) Stream removed, broadcasting: 1\nI0124 01:28:10.710087    4416 log.go:172] (0xc0001116b0) Go away received\nI0124 01:28:10.710688    4416 log.go:172] (0xc0001116b0) (0xc0006c9d60) Stream removed, broadcasting: 1\nI0124 01:28:10.710745    4416 log.go:172] (0xc0001116b0) (0xc0008f8000) Stream removed, broadcasting: 3\nI0124 01:28:10.710752    4416 log.go:172] (0xc0001116b0) (0xc000407360) Stream removed, broadcasting: 5\n"
Jan 24 01:28:10.718: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 24 01:28:10.718: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 24 01:28:10.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8448 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 24 01:28:11.047: INFO: rc: 126
Jan 24 01:28:11.047: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8448 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
cannot exec in a stopped state: unknown

stderr:
I0124 01:28:11.022448    4439 log.go:172] (0xc0000f5130) (0xc000816000) Create stream
I0124 01:28:11.022738    4439 log.go:172] (0xc0000f5130) (0xc000816000) Stream added, broadcasting: 1
I0124 01:28:11.024788    4439 log.go:172] (0xc0000f5130) Reply frame received for 1
I0124 01:28:11.024825    4439 log.go:172] (0xc0000f5130) (0xc000747180) Create stream
I0124 01:28:11.024842    4439 log.go:172] (0xc0000f5130) (0xc000747180) Stream added, broadcasting: 3
I0124 01:28:11.025906    4439 log.go:172] (0xc0000f5130) Reply frame received for 3
I0124 01:28:11.025925    4439 log.go:172] (0xc0000f5130) (0xc000a6a000) Create stream
I0124 01:28:11.025940    4439 log.go:172] (0xc0000f5130) (0xc000a6a000) Stream added, broadcasting: 5
I0124 01:28:11.027208    4439 log.go:172] (0xc0000f5130) Reply frame received for 5
I0124 01:28:11.038084    4439 log.go:172] (0xc0000f5130) Data frame received for 3
I0124 01:28:11.038120    4439 log.go:172] (0xc000747180) (3) Data frame handling
I0124 01:28:11.038151    4439 log.go:172] (0xc000747180) (3) Data frame sent
I0124 01:28:11.039335    4439 log.go:172] (0xc0000f5130) Data frame received for 1
I0124 01:28:11.039355    4439 log.go:172] (0xc000816000) (1) Data frame handling
I0124 01:28:11.039366    4439 log.go:172] (0xc000816000) (1) Data frame sent
I0124 01:28:11.039379    4439 log.go:172] (0xc0000f5130) (0xc000816000) Stream removed, broadcasting: 1
I0124 01:28:11.039630    4439 log.go:172] (0xc0000f5130) (0xc000747180) Stream removed, broadcasting: 3
I0124 01:28:11.039721    4439 log.go:172] (0xc0000f5130) (0xc000a6a000) Stream removed, broadcasting: 5
I0124 01:28:11.039918    4439 log.go:172] (0xc0000f5130) Go away received
I0124 01:28:11.039982    4439 log.go:172] (0xc0000f5130) (0xc000816000) Stream removed, broadcasting: 1
I0124 01:28:11.039997    4439 log.go:172] (0xc0000f5130) (0xc000747180) Stream removed, broadcasting: 3
I0124 01:28:11.040022    4439 log.go:172] (0xc0000f5130) (0xc000a6a000) Stream removed, broadcasting: 5
command terminated with exit code 126

error:
exit status 126
Jan 24 01:28:21.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8448 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 24 01:28:21.258: INFO: rc: 1
Jan 24 01:28:21.258: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8448 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Jan 24 01:28:31.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8448 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 24 01:28:31.375: INFO: rc: 1
Jan 24 01:28:31.375: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8448 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
[the identical RunHostCmd attempt fails with the same NotFound error every 10 seconds from 01:28:41.547 through 01:33:05.294; only the timestamps differ]
Jan 24 01:33:15.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8448 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 24 01:33:15.476: INFO: rc: 1
Jan 24 01:33:15.476: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
Jan 24 01:33:15.476: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 24 01:33:15.493: INFO: Deleting all statefulset in ns statefulset-8448
Jan 24 01:33:15.496: INFO: Scaling statefulset ss to 0
Jan 24 01:33:15.505: INFO: Waiting for statefulset status.replicas updated to 0
Jan 24 01:33:15.508: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:33:15.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8448" for this suite.

• [SLOW TEST:381.143 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":268,"skipped":4379,"failed":0}
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:33:15.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override all
Jan 24 01:33:15.650: INFO: Waiting up to 5m0s for pod "client-containers-5deb1ed7-645a-4052-abba-8873d4c52461" in namespace "containers-5801" to be "success or failure"
Jan 24 01:33:15.779: INFO: Pod "client-containers-5deb1ed7-645a-4052-abba-8873d4c52461": Phase="Pending", Reason="", readiness=false. Elapsed: 128.949928ms
Jan 24 01:33:17.785: INFO: Pod "client-containers-5deb1ed7-645a-4052-abba-8873d4c52461": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135086396s
Jan 24 01:33:19.794: INFO: Pod "client-containers-5deb1ed7-645a-4052-abba-8873d4c52461": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143869781s
Jan 24 01:33:21.803: INFO: Pod "client-containers-5deb1ed7-645a-4052-abba-8873d4c52461": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152802336s
Jan 24 01:33:23.809: INFO: Pod "client-containers-5deb1ed7-645a-4052-abba-8873d4c52461": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.158596078s
STEP: Saw pod success
Jan 24 01:33:23.809: INFO: Pod "client-containers-5deb1ed7-645a-4052-abba-8873d4c52461" satisfied condition "success or failure"
Jan 24 01:33:23.812: INFO: Trying to get logs from node jerma-node pod client-containers-5deb1ed7-645a-4052-abba-8873d4c52461 container test-container: 
STEP: delete the pod
Jan 24 01:33:23.895: INFO: Waiting for pod client-containers-5deb1ed7-645a-4052-abba-8873d4c52461 to disappear
Jan 24 01:33:23.927: INFO: Pod client-containers-5deb1ed7-645a-4052-abba-8873d4c52461 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:33:23.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5801" for this suite.

• [SLOW TEST:8.402 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4380,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:33:23.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 24 01:33:24.061: INFO: Waiting up to 5m0s for pod "pod-8f35a277-8159-4176-9ea7-f8088c099141" in namespace "emptydir-1956" to be "success or failure"
Jan 24 01:33:24.104: INFO: Pod "pod-8f35a277-8159-4176-9ea7-f8088c099141": Phase="Pending", Reason="", readiness=false. Elapsed: 42.73258ms
Jan 24 01:33:26.114: INFO: Pod "pod-8f35a277-8159-4176-9ea7-f8088c099141": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052968442s
Jan 24 01:33:28.121: INFO: Pod "pod-8f35a277-8159-4176-9ea7-f8088c099141": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059610789s
Jan 24 01:33:30.127: INFO: Pod "pod-8f35a277-8159-4176-9ea7-f8088c099141": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065352708s
Jan 24 01:33:32.133: INFO: Pod "pod-8f35a277-8159-4176-9ea7-f8088c099141": Phase="Running", Reason="", readiness=true. Elapsed: 8.071800517s
Jan 24 01:33:34.138: INFO: Pod "pod-8f35a277-8159-4176-9ea7-f8088c099141": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.076838164s
STEP: Saw pod success
Jan 24 01:33:34.138: INFO: Pod "pod-8f35a277-8159-4176-9ea7-f8088c099141" satisfied condition "success or failure"
Jan 24 01:33:34.141: INFO: Trying to get logs from node jerma-node pod pod-8f35a277-8159-4176-9ea7-f8088c099141 container test-container: 
STEP: delete the pod
Jan 24 01:33:34.281: INFO: Waiting for pod pod-8f35a277-8159-4176-9ea7-f8088c099141 to disappear
Jan 24 01:33:34.290: INFO: Pod pod-8f35a277-8159-4176-9ea7-f8088c099141 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:33:34.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1956" for this suite.

• [SLOW TEST:10.358 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4429,"failed":0}
SSSSSSSSS
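
Both this spec and the (root,0777,tmpfs) variant further down come down to an emptyDir volume with medium "Memory", which the kubelet backs with tmpfs instead of node disk; the 0777 variant only adds a file-permission check on top. A hedged client-go sketch, with namespace, image, and names as placeholder assumptions:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        must(err)
        cs, err := kubernetes.NewForConfig(cfg)
        must(err)
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        // Medium "Memory" asks for a tmpfs-backed emptyDir.
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "docker.io/library/busybox:1.29",
                    // Print the mount type and the directory mode, which is what
                    // "volume on tmpfs should have the correct mode" asserts on.
                    Command:      []string{"sh", "-c", "mount | grep /test-volume; ls -ld /test-volume"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
                }},
            },
        }
        _, err = cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
        must(err)
    }
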
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:33:34.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 24 01:33:34.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 24 01:33:34.586: INFO: stderr: ""
Jan 24 01:33:34.586: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18+\", GitVersion:\"v1.18.0-alpha.1.106+4f70231ce7736c\", GitCommit:\"4f70231ce7736cc748f76526c98955f86c667a41\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T17:08:54Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:33:34.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5513" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":271,"skipped":4438,"failed":0}
SSSS
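
"All data is printed" reduces to checking that both version.Info structs, client and server, show up in the command output the log captured above. A small Go sketch of that check, shelling out to the same kubectl invocation; the binary path and the exact field list are assumptions:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Mirror the logged command: kubectl --kubeconfig=/root/.kube/config version
        out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config", "version").CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%v: %s", err, out))
        }
        // Both halves of version.Info must be present in the output.
        for _, want := range []string{"Client Version", "Server Version", "GitVersion", "GitCommit", "Platform"} {
            if !strings.Contains(string(out), want) {
                panic("missing field in kubectl version output: " + want)
            }
        }
        fmt.Print(string(out))
    }
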
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:33:34.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 24 01:33:34.754: INFO: Create a RollingUpdate DaemonSet
Jan 24 01:33:34.796: INFO: Check that daemon pods launch on every node of the cluster
Jan 24 01:33:34.820: INFO: Number of nodes with available pods: 0
Jan 24 01:33:34.820: INFO: Node jerma-node is running more than one daemon pod
Jan 24 01:33:35.841: INFO: Number of nodes with available pods: 0
Jan 24 01:33:35.842: INFO: Node jerma-node is running more than one daemon pod
Jan 24 01:33:36.949: INFO: Number of nodes with available pods: 0
Jan 24 01:33:36.949: INFO: Node jerma-node is running more than one daemon pod
Jan 24 01:33:37.832: INFO: Number of nodes with available pods: 0
Jan 24 01:33:37.832: INFO: Node jerma-node is running more than one daemon pod
Jan 24 01:33:38.829: INFO: Number of nodes with available pods: 0
Jan 24 01:33:38.830: INFO: Node jerma-node is running more than one daemon pod
Jan 24 01:33:40.360: INFO: Number of nodes with available pods: 0
Jan 24 01:33:40.360: INFO: Node jerma-node is running more than one daemon pod
Jan 24 01:33:40.992: INFO: Number of nodes with available pods: 0
Jan 24 01:33:40.993: INFO: Node jerma-node is running more than one daemon pod
Jan 24 01:33:42.393: INFO: Number of nodes with available pods: 0
Jan 24 01:33:42.393: INFO: Node jerma-node is running more than one daemon pod
Jan 24 01:33:42.948: INFO: Number of nodes with available pods: 0
Jan 24 01:33:42.948: INFO: Node jerma-node is running more than one daemon pod
Jan 24 01:33:43.835: INFO: Number of nodes with available pods: 2
Jan 24 01:33:43.835: INFO: Number of running nodes: 2, number of available pods: 2
Jan 24 01:33:43.835: INFO: Update the DaemonSet to trigger a rollout
Jan 24 01:33:43.841: INFO: Updating DaemonSet daemon-set
Jan 24 01:33:54.136: INFO: Roll back the DaemonSet before rollout is complete
Jan 24 01:33:54.152: INFO: Updating DaemonSet daemon-set
Jan 24 01:33:54.152: INFO: Make sure DaemonSet rollback is complete
Jan 24 01:33:54.346: INFO: Wrong image for pod: daemon-set-t2ds4. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 24 01:33:54.346: INFO: Pod daemon-set-t2ds4 is not available
Jan 24 01:33:55.403: INFO: Wrong image for pod: daemon-set-t2ds4. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 24 01:33:55.403: INFO: Pod daemon-set-t2ds4 is not available
Jan 24 01:33:56.402: INFO: Wrong image for pod: daemon-set-t2ds4. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 24 01:33:56.402: INFO: Pod daemon-set-t2ds4 is not available
Jan 24 01:33:57.394: INFO: Wrong image for pod: daemon-set-t2ds4. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 24 01:33:57.394: INFO: Pod daemon-set-t2ds4 is not available
Jan 24 01:33:59.078: INFO: Wrong image for pod: daemon-set-t2ds4. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 24 01:33:59.079: INFO: Pod daemon-set-t2ds4 is not available
Jan 24 01:33:59.421: INFO: Wrong image for pod: daemon-set-t2ds4. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 24 01:33:59.421: INFO: Pod daemon-set-t2ds4 is not available
Jan 24 01:34:00.395: INFO: Pod daemon-set-xtgqt is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6430, will wait for the garbage collector to delete the pods
Jan 24 01:34:00.475: INFO: Deleting DaemonSet.extensions daemon-set took: 12.144609ms
Jan 24 01:34:00.976: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.552965ms
Jan 24 01:34:12.380: INFO: Number of nodes with available pods: 0
Jan 24 01:34:12.381: INFO: Number of running nodes: 0, number of available pods: 0
Jan 24 01:34:12.384: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6430/daemonsets","resourceVersion":"3931689"},"items":null}

Jan 24 01:34:12.388: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6430/pods","resourceVersion":"3931689"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:34:12.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6430" for this suite.

• [SLOW TEST:37.885 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":272,"skipped":4442,"failed":0}
SS
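
The rollback above is not a separate API call: the test updates the DaemonSet template to an unpullable image so the RollingUpdate stalls, then "rolls back" by writing the old template again, and asserts that only the broken pod (daemon-set-t2ds4 in this run) is replaced. A sketch of that update/rollback pair with client-go v0.18+ signatures; the namespace and image values mirror the log, error handling is minimal:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.TODO()
        ds, err := cs.AppsV1().DaemonSets("daemonsets-6430").Get(ctx, "daemon-set", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        good := ds.Spec.Template.Spec.Containers[0].Image // docker.io/library/httpd:2.4.38-alpine in this run

        // Trigger a RollingUpdate rollout with an unpullable image; pods for the
        // new template can never become available, so the rollout stalls part-way.
        ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
        ds, err = cs.AppsV1().DaemonSets("daemonsets-6430").Update(ctx, ds, metav1.UpdateOptions{})
        if err != nil {
            panic(err)
        }

        // "Rollback" is just another update restoring the previous template; the
        // controller replaces only the broken pod and leaves the still-healthy
        // one untouched, hence "without unnecessary restarts".
        ds.Spec.Template.Spec.Containers[0].Image = good
        if _, err = cs.AppsV1().DaemonSets("daemonsets-6430").Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }
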
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:34:12.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 24 01:34:12.569: INFO: Waiting up to 5m0s for pod "pod-8ec9af9d-b36f-4fcf-a119-36feda64b8d8" in namespace "emptydir-2603" to be "success or failure"
Jan 24 01:34:12.580: INFO: Pod "pod-8ec9af9d-b36f-4fcf-a119-36feda64b8d8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.375448ms
Jan 24 01:34:14.591: INFO: Pod "pod-8ec9af9d-b36f-4fcf-a119-36feda64b8d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022115835s
Jan 24 01:34:16.601: INFO: Pod "pod-8ec9af9d-b36f-4fcf-a119-36feda64b8d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032207235s
Jan 24 01:34:18.625: INFO: Pod "pod-8ec9af9d-b36f-4fcf-a119-36feda64b8d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055786343s
Jan 24 01:34:20.636: INFO: Pod "pod-8ec9af9d-b36f-4fcf-a119-36feda64b8d8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066987072s
Jan 24 01:34:22.641: INFO: Pod "pod-8ec9af9d-b36f-4fcf-a119-36feda64b8d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071983793s
STEP: Saw pod success
Jan 24 01:34:22.641: INFO: Pod "pod-8ec9af9d-b36f-4fcf-a119-36feda64b8d8" satisfied condition "success or failure"
Jan 24 01:34:22.644: INFO: Trying to get logs from node jerma-node pod pod-8ec9af9d-b36f-4fcf-a119-36feda64b8d8 container test-container: 
STEP: delete the pod
Jan 24 01:34:22.681: INFO: Waiting for pod pod-8ec9af9d-b36f-4fcf-a119-36feda64b8d8 to disappear
Jan 24 01:34:22.696: INFO: Pod pod-8ec9af9d-b36f-4fcf-a119-36feda64b8d8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:34:22.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2603" for this suite.

• [SLOW TEST:10.222 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4444,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:34:22.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 24 01:34:41.098: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 24 01:34:41.108: INFO: Pod pod-with-poststart-http-hook still exists
Jan 24 01:34:43.108: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 24 01:34:43.115: INFO: Pod pod-with-poststart-http-hook still exists
Jan 24 01:34:45.108: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 24 01:34:45.117: INFO: Pod pod-with-poststart-http-hook still exists
Jan 24 01:34:47.108: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 24 01:34:47.115: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:34:47.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1359" for this suite.

• [SLOW TEST:24.421 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4460,"failed":0}
SS
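
The pod under test carries a postStart HTTPGet hook pointed at the handler pod created in the BeforeEach step above. A hedged sketch of that pod shape; the handler IP, port, echo path, and image are assumptions, and note the hook type is named corev1.Handler rather than corev1.LifecycleHandler in client-go before v0.22:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "pod-with-poststart-http-hook",
                    Image: "docker.io/library/nginx:1.14-alpine", // any long-running image works
                    Lifecycle: &corev1.Lifecycle{
                        PostStart: &corev1.LifecycleHandler{
                            HTTPGet: &corev1.HTTPGetAction{
                                Path: "/echo?msg=poststart",  // assumed handler endpoint
                                Host: "10.44.0.1",            // assumed IP of the HTTPGet handler pod
                                Port: intstr.FromInt(8080),
                            },
                        },
                    },
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
        // The kubelet fires the GET right after the container starts; the suite
        // then verifies the handler saw the request before deleting this pod,
        // which is the disappear-polling visible in the log above.
    }
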
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:34:47.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-5578ff8a-f121-4aff-bdaa-36982af4357f
STEP: Creating a pod to test consume secrets
Jan 24 01:34:47.408: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3633ca8a-718c-4b89-b0e1-4df8dcb54788" in namespace "projected-8403" to be "success or failure"
Jan 24 01:34:47.431: INFO: Pod "pod-projected-secrets-3633ca8a-718c-4b89-b0e1-4df8dcb54788": Phase="Pending", Reason="", readiness=false. Elapsed: 22.699765ms
Jan 24 01:34:49.436: INFO: Pod "pod-projected-secrets-3633ca8a-718c-4b89-b0e1-4df8dcb54788": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027417749s
Jan 24 01:34:51.443: INFO: Pod "pod-projected-secrets-3633ca8a-718c-4b89-b0e1-4df8dcb54788": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035226876s
Jan 24 01:34:53.451: INFO: Pod "pod-projected-secrets-3633ca8a-718c-4b89-b0e1-4df8dcb54788": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042589109s
Jan 24 01:34:55.538: INFO: Pod "pod-projected-secrets-3633ca8a-718c-4b89-b0e1-4df8dcb54788": Phase="Pending", Reason="", readiness=false. Elapsed: 8.129508291s
Jan 24 01:34:57.546: INFO: Pod "pod-projected-secrets-3633ca8a-718c-4b89-b0e1-4df8dcb54788": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.137322662s
STEP: Saw pod success
Jan 24 01:34:57.546: INFO: Pod "pod-projected-secrets-3633ca8a-718c-4b89-b0e1-4df8dcb54788" satisfied condition "success or failure"
Jan 24 01:34:57.550: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-3633ca8a-718c-4b89-b0e1-4df8dcb54788 container projected-secret-volume-test: 
STEP: delete the pod
Jan 24 01:34:57.684: INFO: Waiting for pod pod-projected-secrets-3633ca8a-718c-4b89-b0e1-4df8dcb54788 to disappear
Jan 24 01:34:57.690: INFO: Pod pod-projected-secrets-3633ca8a-718c-4b89-b0e1-4df8dcb54788 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:34:57.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8403" for this suite.

• [SLOW TEST:10.575 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4462,"failed":0}
SSSSS
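
A "mapping" in this spec means the secret key is projected into the volume under a different file name via items. A sketch of the secret plus the consuming pod; the key, paths, image, and namespace are placeholder assumptions:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.TODO()

        // The secret whose key gets remapped; name, key, and value are placeholders.
        secret := &corev1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
            StringData: map[string]string{"data-1": "value-1"},
        }
        if _, err := cs.CoreV1().Secrets("default").Create(ctx, secret, metav1.CreateOptions{}); err != nil {
            panic(err)
        }

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "projected-secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                Secret: &corev1.SecretProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-demo"},
                                    // The mapping: key data-1 appears as new-path-data-1.
                                    Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:         "projected-secret-volume-test",
                    Image:        "docker.io/library/busybox:1.29",
                    Command:      []string{"cat", "/etc/projected-secret-volume/new-path-data-1"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret-volume", MountPath: "/etc/projected-secret-volume"}},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }
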
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:34:57.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 24 01:34:57.910: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:35:04.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8758" for this suite.

• [SLOW TEST:6.451 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":276,"skipped":4467,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
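
Listing CRDs goes through the apiextensions.k8s.io group, which has its own generated clientset rather than the core one; that is why the test only needs the kubeconfig and no namespaced fixtures beyond the namespace itself. A minimal sketch, assuming the standard apiextensions-apiserver client package:

    package main

    import (
        "context"
        "fmt"

        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        // CRDs live in the apiextensions API group, so they need their own clientset.
        ac, err := apiextclient.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // CustomResourceDefinitions are cluster-scoped: no namespace argument.
        crds, err := ac.ApiextensionsV1().CustomResourceDefinitions().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, crd := range crds.Items {
            fmt.Println(crd.Name, crd.Spec.Group)
        }
    }
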
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:35:04.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:35:09.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9826" for this suite.

• [SLOW TEST:5.089 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":277,"skipped":4533,"failed":0}
SSSSSSSSS
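
The ordering guarantee under test: watches opened at the same resourceVersion must deliver the same events in the same order. A simplified sketch with two concurrent watches on configmaps; the namespace and event count are arbitrary, and the type assertion assumes no error events arrive while something else (the test uses a background goroutine) produces watch events:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.TODO()

        // Anchor both watches at the same resourceVersion so they see the same history.
        list, err := cs.CoreV1().ConfigMaps("default").List(ctx, metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        opts := metav1.ListOptions{ResourceVersion: list.ResourceVersion}
        w1, err := cs.CoreV1().ConfigMaps("default").Watch(ctx, opts)
        if err != nil {
            panic(err)
        }
        w2, err := cs.CoreV1().ConfigMaps("default").Watch(ctx, opts)
        if err != nil {
            panic(err)
        }
        defer w1.Stop()
        defer w2.Stop()

        // Both watches must report identical resourceVersions in identical order.
        for i := 0; i < 10; i++ {
            e1 := (<-w1.ResultChan()).Object.(*corev1.ConfigMap)
            e2 := (<-w2.ResultChan()).Object.(*corev1.ConfigMap)
            if e1.ResourceVersion != e2.ResourceVersion {
                panic("watches diverged: " + e1.ResourceVersion + " vs " + e2.ResourceVersion)
            }
        }
    }
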
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 01:35:09.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-2fe7ee57-24a0-4b6b-8903-9cd9ad81c186
STEP: Creating a pod to test consume configMaps
Jan 24 01:35:09.507: INFO: Waiting up to 5m0s for pod "pod-configmaps-715eb3e5-93f3-4182-b523-8737a8eb8628" in namespace "configmap-7921" to be "success or failure"
Jan 24 01:35:09.516: INFO: Pod "pod-configmaps-715eb3e5-93f3-4182-b523-8737a8eb8628": Phase="Pending", Reason="", readiness=false. Elapsed: 8.992572ms
Jan 24 01:35:11.523: INFO: Pod "pod-configmaps-715eb3e5-93f3-4182-b523-8737a8eb8628": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015458505s
Jan 24 01:35:13.529: INFO: Pod "pod-configmaps-715eb3e5-93f3-4182-b523-8737a8eb8628": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021359545s
Jan 24 01:35:15.582: INFO: Pod "pod-configmaps-715eb3e5-93f3-4182-b523-8737a8eb8628": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074283333s
Jan 24 01:35:17.590: INFO: Pod "pod-configmaps-715eb3e5-93f3-4182-b523-8737a8eb8628": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.082620299s
STEP: Saw pod success
Jan 24 01:35:17.590: INFO: Pod "pod-configmaps-715eb3e5-93f3-4182-b523-8737a8eb8628" satisfied condition "success or failure"
Jan 24 01:35:17.595: INFO: Trying to get logs from node jerma-node pod pod-configmaps-715eb3e5-93f3-4182-b523-8737a8eb8628 container configmap-volume-test: 
STEP: delete the pod
Jan 24 01:35:17.645: INFO: Waiting for pod pod-configmaps-715eb3e5-93f3-4182-b523-8737a8eb8628 to disappear
Jan 24 01:35:17.656: INFO: Pod pod-configmaps-715eb3e5-93f3-4182-b523-8737a8eb8628 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 01:35:17.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7921" for this suite.

• [SLOW TEST:8.420 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4542,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
Jan 24 01:35:17.673: INFO: Running AfterSuite actions on all nodes
Jan 24 01:35:17.673: INFO: Running AfterSuite actions on node 1
Jan 24 01:35:17.673: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4563,"failed":0}

Ran 278 of 4841 Specs in 6962.061 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4563 Skipped
PASS