I0125 21:09:09.861657 8 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0125 21:09:09.862667 8 e2e.go:109] Starting e2e run "fe3a7429-6068-41ef-9683-277f8aa0278c" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1579986548 - Will randomize all specs
Will run 278 of 4814 specs

Jan 25 21:09:09.915: INFO: >>> kubeConfig: /root/.kube/config
Jan 25 21:09:09.922: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 25 21:09:09.958: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 25 21:09:10.038: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 25 21:09:10.038: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 25 21:09:10.038: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 25 21:09:10.057: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 25 21:09:10.057: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 25 21:09:10.057: INFO: e2e test version: v1.17.0
Jan 25 21:09:10.060: INFO: kube-apiserver version: v1.17.0
Jan 25 21:09:10.060: INFO: >>> kubeConfig: /root/.kube/config
Jan 25 21:09:10.066: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 21:09:10.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
Jan 25 21:09:10.204: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jan 25 21:09:10.206: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 21:09:23.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7040" for this suite.
• [SLOW TEST:13.862 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":1,"skipped":5,"failed":0}
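Note on the spec above: it drives a pod whose spec.initContainers must all exit successfully before the regular containers start, with restartPolicy: Never so an init failure fails the pod outright. A minimal Go sketch of such a pod follows; the pod name, images, and commands are illustrative assumptions, not values taken from this run.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildInitPod sketches the shape of the pod this spec exercises: two init
// containers that must complete before the app container starts, and
// RestartPolicy Never so the kubelet never retries a failed init container.
func buildInitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}}, // assumed image/command
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}
}

func main() {
	fmt.Printf("init containers: %+v\n", buildInitPod().Spec.InitContainers)
}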
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 21:09:23.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-8827
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 25 21:09:24.060: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 25 21:09:58.214: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8827 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 21:09:58.214: INFO: >>> kubeConfig: /root/.kube/config
I0125 21:09:58.289138 8 log.go:172] (0xc00154c2c0) (0xc001f7c780) Create stream
I0125 21:09:58.289353 8 log.go:172] (0xc00154c2c0) (0xc001f7c780) Stream added, broadcasting: 1
I0125 21:09:58.295484 8 log.go:172] (0xc00154c2c0) Reply frame received for 1
I0125 21:09:58.295581 8 log.go:172] (0xc00154c2c0) (0xc002a03d60) Create stream
I0125 21:09:58.295593 8 log.go:172] (0xc00154c2c0) (0xc002a03d60) Stream added, broadcasting: 3
I0125 21:09:58.296938 8 log.go:172] (0xc00154c2c0) Reply frame received for 3
I0125 21:09:58.296961 8 log.go:172] (0xc00154c2c0) (0xc001f7c8c0) Create stream
I0125 21:09:58.296970 8 log.go:172] (0xc00154c2c0) (0xc001f7c8c0) Stream added, broadcasting: 5
I0125 21:09:58.298594 8 log.go:172] (0xc00154c2c0) Reply frame received for 5
I0125 21:09:58.418420 8 log.go:172] (0xc00154c2c0) Data frame received for 3
I0125 21:09:58.418501 8 log.go:172] (0xc002a03d60) (3) Data frame handling
I0125 21:09:58.418527 8 log.go:172] (0xc002a03d60) (3) Data frame sent
I0125 21:09:58.502747 8 log.go:172] (0xc00154c2c0) (0xc002a03d60) Stream removed, broadcasting: 3
I0125 21:09:58.503255 8 log.go:172] (0xc00154c2c0) Data frame received for 1
I0125 21:09:58.503268 8 log.go:172] (0xc001f7c780) (1) Data frame handling
I0125 21:09:58.503291 8 log.go:172] (0xc001f7c780) (1) Data frame sent
I0125 21:09:58.503302 8 log.go:172] (0xc00154c2c0) (0xc001f7c780) Stream removed, broadcasting: 1
I0125 21:09:58.504595 8 log.go:172] (0xc00154c2c0) (0xc001f7c8c0) Stream removed, broadcasting: 5
I0125 21:09:58.504687 8 log.go:172] (0xc00154c2c0) (0xc001f7c780) Stream removed, broadcasting: 1
I0125 21:09:58.504719 8 log.go:172] (0xc00154c2c0) (0xc002a03d60) Stream removed, broadcasting: 3
I0125 21:09:58.504731 8 log.go:172] (0xc00154c2c0) (0xc001f7c8c0) Stream removed, broadcasting: 5
I0125 21:09:58.504847 8 log.go:172] (0xc00154c2c0) Go away received
Jan 25 21:09:58.505: INFO: Found all expected endpoints: [netserver-0]
Jan 25 21:09:58.511: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8827 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 21:09:58.512: INFO: >>> kubeConfig: /root/.kube/config
I0125 21:09:58.561443 8 log.go:172] (0xc000af26e0) (0xc001ed0000) Create stream
I0125 21:09:58.561773 8 log.go:172] (0xc000af26e0) (0xc001ed0000) Stream added, broadcasting: 1
I0125 21:09:58.568860 8 log.go:172] (0xc000af26e0) Reply frame received for 1
I0125 21:09:58.568984 8 log.go:172] (0xc000af26e0) (0xc001fe0500) Create stream
I0125 21:09:58.568994 8 log.go:172] (0xc000af26e0) (0xc001fe0500) Stream added, broadcasting: 3
I0125 21:09:58.570622 8 log.go:172] (0xc000af26e0) Reply frame received for 3
I0125 21:09:58.570658 8 log.go:172] (0xc000af26e0) (0xc001f7caa0) Create stream
I0125 21:09:58.570681 8 log.go:172] (0xc000af26e0) (0xc001f7caa0) Stream added, broadcasting: 5
I0125 21:09:58.571842 8 log.go:172] (0xc000af26e0) Reply frame received for 5
I0125 21:09:58.651576 8 log.go:172] (0xc000af26e0) Data frame received for 3
I0125 21:09:58.651668 8 log.go:172] (0xc001fe0500) (3) Data frame handling
I0125 21:09:58.651687 8 log.go:172] (0xc001fe0500) (3) Data frame sent
I0125 21:09:58.716732 8 log.go:172] (0xc000af26e0) (0xc001fe0500) Stream removed, broadcasting: 3
I0125 21:09:58.717569 8 log.go:172] (0xc000af26e0) (0xc001f7caa0) Stream removed, broadcasting: 5
I0125 21:09:58.717766 8 log.go:172] (0xc000af26e0) Data frame received for 1
I0125 21:09:58.717810 8 log.go:172] (0xc001ed0000) (1) Data frame handling
I0125 21:09:58.717842 8 log.go:172] (0xc001ed0000) (1) Data frame sent
I0125 21:09:58.717863 8 log.go:172] (0xc000af26e0) (0xc001ed0000) Stream removed, broadcasting: 1
I0125 21:09:58.717898 8 log.go:172] (0xc000af26e0) Go away received
I0125 21:09:58.718377 8 log.go:172] (0xc000af26e0) (0xc001ed0000) Stream removed, broadcasting: 1
I0125 21:09:58.718402 8 log.go:172] (0xc000af26e0) (0xc001fe0500) Stream removed, broadcasting: 3
I0125 21:09:58.718429 8 log.go:172] (0xc000af26e0) (0xc001f7caa0) Stream removed, broadcasting: 5
Jan 25 21:09:58.718: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 21:09:58.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8827" for this suite.
• [SLOW TEST:34.805 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":5,"failed":0}
S
------------------------------
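Note: each connectivity check above shells into the host-test pod via ExecWithOptions and curls http://<netserver-IP>:8080/hostName, passing when every netserver reports a hostname. The same check, expressed directly in Go rather than through an exec into the pod, might look like this sketch; the target IP is one of the two this run probed, and the timeout mirrors curl's --max-time 15.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHostName fetches /hostName from an agnhost netserver pod, the same
// endpoint the suite's curl command hits. It must run from somewhere that
// can reach the pod network (e.g. a node or another pod).
func probeHostName(podIP string) (string, error) {
	client := &http.Client{Timeout: 15 * time.Second}
	resp, err := client.Get(fmt.Sprintf("http://%s:8080/hostName", podIP))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	// 10.44.0.1 is the first netserver IP probed in the run above.
	if name, err := probeHostName("10.44.0.1"); err == nil {
		fmt.Println("endpoint reported:", name)
	}
}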
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 21:09:58.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-7bb28d7a-f853-4925-9e8f-b8afba780e4e in namespace container-probe-9373
Jan 25 21:10:10.995: INFO: Started pod test-webserver-7bb28d7a-f853-4925-9e8f-b8afba780e4e in namespace container-probe-9373
STEP: checking the pod's current state and verifying that restartCount is present
Jan 25 21:10:10.999: INFO: Initial restart count of pod test-webserver-7bb28d7a-f853-4925-9e8f-b8afba780e4e is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 21:14:12.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9373" for this suite.
• [SLOW TEST:253.962 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":6,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
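Note: the test-webserver pod above carries an HTTP liveness probe against /healthz, and the spec passes because restartCount stays at 0 across the roughly four-minute observation window. A sketch of such a probe, using the v1.17-era core/v1 types that match this run (later releases renamed the embedded Handler field to ProbeHandler); the port and thresholds here are assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// healthzProbe builds an HTTP GET liveness probe. Attached to a container
// whose handler keeps returning 200, the kubelet never restarts it, which
// is exactly what this spec asserts.
func healthzProbe() *corev1.Probe {
	return &corev1.Probe{
		Handler: corev1.Handler{ // v1.17 field name; ProbeHandler in newer releases
			HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(80)},
		},
		InitialDelaySeconds: 15, // assumed values
		FailureThreshold:    3,
	}
}

func main() { fmt.Printf("%+v\n", healthzProbe()) }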
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 21:14:12.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 25 21:14:12.864: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6864 /api/v1/namespaces/watch-6864/configmaps/e2e-watch-test-configmap-a 3bce8664-34ff-4a5a-892e-bd35e04a80ee 4321855 0 2020-01-25 21:14:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 25 21:14:12.865: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6864 /api/v1/namespaces/watch-6864/configmaps/e2e-watch-test-configmap-a 3bce8664-34ff-4a5a-892e-bd35e04a80ee 4321855 0 2020-01-25 21:14:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 25 21:14:22.888: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6864 /api/v1/namespaces/watch-6864/configmaps/e2e-watch-test-configmap-a 3bce8664-34ff-4a5a-892e-bd35e04a80ee 4321890 0 2020-01-25 21:14:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 25 21:14:22.889: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6864 /api/v1/namespaces/watch-6864/configmaps/e2e-watch-test-configmap-a 3bce8664-34ff-4a5a-892e-bd35e04a80ee 4321890 0 2020-01-25 21:14:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 25 21:14:32.906: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6864 /api/v1/namespaces/watch-6864/configmaps/e2e-watch-test-configmap-a 3bce8664-34ff-4a5a-892e-bd35e04a80ee 4321914 0 2020-01-25 21:14:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 25 21:14:32.907: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6864 /api/v1/namespaces/watch-6864/configmaps/e2e-watch-test-configmap-a 3bce8664-34ff-4a5a-892e-bd35e04a80ee 4321914 0 2020-01-25 21:14:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 25 21:14:42.922: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6864 /api/v1/namespaces/watch-6864/configmaps/e2e-watch-test-configmap-a 3bce8664-34ff-4a5a-892e-bd35e04a80ee 4321936 0 2020-01-25 21:14:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 25 21:14:42.923: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6864 /api/v1/namespaces/watch-6864/configmaps/e2e-watch-test-configmap-a 3bce8664-34ff-4a5a-892e-bd35e04a80ee 4321936 0 2020-01-25 21:14:12 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 25 21:14:52.936: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6864 /api/v1/namespaces/watch-6864/configmaps/e2e-watch-test-configmap-b 65d84d48-3de8-432b-8c18-bf6c59cb6f35 4321960 0 2020-01-25 21:14:52 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 25 21:14:52.936: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6864 /api/v1/namespaces/watch-6864/configmaps/e2e-watch-test-configmap-b 65d84d48-3de8-432b-8c18-bf6c59cb6f35 4321960 0 2020-01-25 21:14:52 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 25 21:15:02.952: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6864 /api/v1/namespaces/watch-6864/configmaps/e2e-watch-test-configmap-b 65d84d48-3de8-432b-8c18-bf6c59cb6f35 4321982 0 2020-01-25 21:14:52 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 25 21:15:02.952: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6864 /api/v1/namespaces/watch-6864/configmaps/e2e-watch-test-configmap-b 65d84d48-3de8-432b-8c18-bf6c59cb6f35 4321982 0 2020-01-25 21:14:52 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 21:15:12.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6864" for this suite.
• [SLOW TEST:60.289 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":4,"skipped":32,"failed":0}
SSSSS
------------------------------
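Note: the ADDED/MODIFIED/DELETED events above can be reproduced with a plain label-selected watch. The namespace and label selector below come from the log; the kubeconfig path and the context-free Watch signature (as in client-go v0.17, matching this run's v1.17 cluster; later client-go versions take a context.Context) are assumptions.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig path the run used.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// Watch only configmaps carrying label A, as the "watch on configmaps
	// with label A" step does.
	w, err := cs.CoreV1().ConfigMaps("watch-6864").Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		fmt.Println("Got :", ev.Type) // ADDED / MODIFIED / DELETED, as in the log
	}
}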
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 21:15:12.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 25 21:15:27.256: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 21:15:27.308: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 21:15:29.309: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 21:15:29.318: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 21:15:31.309: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 21:15:31.317: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 21:15:33.309: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 21:15:33.317: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 21:15:35.309: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 21:15:35.316: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 21:15:37.309: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 21:15:37.317: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 21:15:39.309: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 21:15:39.319: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 21:15:41.309: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 21:15:41.316: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 21:15:43.309: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 21:15:43.317: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 21:15:43.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5621" for this suite.
• [SLOW TEST:30.381 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":37,"failed":0}
SSSSSSSSS
------------------------------
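Note: the pod-with-prestop-exec-hook pod above declares a preStop exec handler, and the suite verifies the handler fired (against the separate HTTP-serving helper pod created in BeforeEach) before the pod disappears. A sketch of the container-level wiring, again with v1.17-era types (Handler rather than the later LifecycleHandler); the image and hook command are assumptions, since the real suite's hook notifies its helper pod.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// prestopContainer shows where a preStop exec hook attaches: on the
// container, not the pod. The kubelet runs the command inside the
// container before sending SIGTERM during pod deletion.
func prestopContainer() corev1.Container {
	return corev1.Container{
		Name:  "pod-with-prestop-exec-hook", // name taken from the log
		Image: "busybox",                    // assumed image
		Lifecycle: &corev1.Lifecycle{
			PreStop: &corev1.Handler{ // v1.17 field type; LifecycleHandler in newer releases
				Exec: &corev1.ExecAction{
					// hypothetical command; the suite's hook calls out to its helper pod
					Command: []string{"sh", "-c", "echo prestop-fired"},
				},
			},
		},
	}
}

func main() { fmt.Printf("%+v\n", prestopContainer().Lifecycle.PreStop.Exec) }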
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 21:15:43.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 25 21:15:43.647: INFO: Waiting up to 5m0s for pod "pod-d67eabae-2641-4eee-9bf4-4c7c4258a711" in namespace "emptydir-3973" to be "success or failure"
Jan 25 21:15:43.671: INFO: Pod "pod-d67eabae-2641-4eee-9bf4-4c7c4258a711": Phase="Pending", Reason="", readiness=false. Elapsed: 22.943764ms
Jan 25 21:15:45.680: INFO: Pod "pod-d67eabae-2641-4eee-9bf4-4c7c4258a711": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032564585s
Jan 25 21:15:47.686: INFO: Pod "pod-d67eabae-2641-4eee-9bf4-4c7c4258a711": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038064339s
Jan 25 21:15:49.694: INFO: Pod "pod-d67eabae-2641-4eee-9bf4-4c7c4258a711": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046809532s
Jan 25 21:15:51.704: INFO: Pod "pod-d67eabae-2641-4eee-9bf4-4c7c4258a711": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.056203502s
STEP: Saw pod success
Jan 25 21:15:51.704: INFO: Pod "pod-d67eabae-2641-4eee-9bf4-4c7c4258a711" satisfied condition "success or failure"
Jan 25 21:15:51.708: INFO: Trying to get logs from node jerma-node pod pod-d67eabae-2641-4eee-9bf4-4c7c4258a711 container test-container:
STEP: delete the pod
Jan 25 21:15:51.770: INFO: Waiting for pod pod-d67eabae-2641-4eee-9bf4-4c7c4258a711 to disappear
Jan 25 21:15:51.782: INFO: Pod pod-d67eabae-2641-4eee-9bf4-4c7c4258a711 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 21:15:51.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3973" for this suite.
• [SLOW TEST:8.437 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":46,"failed":0}
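Note: the (non-root,0666,default) case writes a 0666-mode file into an emptyDir volume on the default (disk-backed) medium from a non-root UID, then inspects the result via the container log. A sketch of the volume and security-context wiring; the UID, mount path, image, and command are assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(v int64) *int64 { return &v }

// emptyDirPod runs once to completion ("success or failure" in the log),
// writing and stat-ing a file inside the emptyDir mount as UID 1001.
func emptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1001)}, // assumed non-root UID
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}, // default medium
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "touch /ed/f && chmod 0666 /ed/f && ls -l /ed/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/ed"}},
			}},
		},
	}
}

func main() { fmt.Println(emptyDirPod().Name) }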
[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 21:15:51.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6226.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6226.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6226.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6226.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6226.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6226.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6226.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6226.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6226.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6226.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 25 21:16:04.086: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:04.091: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:04.094: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:04.097: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:04.111: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:04.114: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:04.118: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:04.121: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:04.126: INFO: Lookups using dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6226.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6226.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local jessie_udp@dns-test-service-2.dns-6226.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6226.svc.cluster.local]
Jan 25 21:16:09.150: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:09.157: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:09.163: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:09.168: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:09.186: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:09.190: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:09.202: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:09.213: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:09.225: INFO: Lookups using dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6226.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6226.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local jessie_udp@dns-test-service-2.dns-6226.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6226.svc.cluster.local]
Jan 25 21:16:14.141: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:14.152: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:14.168: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:14.176: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:14.194: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:14.198: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:14.204: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:14.228: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:14.244: INFO: Lookups using dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6226.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6226.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local jessie_udp@dns-test-service-2.dns-6226.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6226.svc.cluster.local]
Jan 25 21:16:19.137: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:19.144: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:19.150: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:19.155: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:19.172: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:19.177: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:19.183: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:19.187: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:19.198: INFO: Lookups using dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6226.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6226.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local jessie_udp@dns-test-service-2.dns-6226.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6226.svc.cluster.local]
Jan 25 21:16:24.136: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:24.141: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:24.145: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:24.150: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:24.165: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:24.171: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:24.175: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:24.178: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:24.184: INFO: Lookups using dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6226.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6226.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local jessie_udp@dns-test-service-2.dns-6226.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6226.svc.cluster.local]
Jan 25 21:16:29.137: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:29.143: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:29.150: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:29.160: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:29.187: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:29.193: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:29.202: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:29.209: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6226.svc.cluster.local from pod dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f: the server could not find the requested resource (get pods dns-test-0598c043-e850-45a2-9464-243c22307e5f)
Jan 25 21:16:29.222: INFO: Lookups using dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6226.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6226.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local jessie_udp@dns-test-service-2.dns-6226.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6226.svc.cluster.local]
Jan 25 21:16:34.186: INFO: DNS probes using dns-6226/dns-test-0598c043-e850-45a2-9464-243c22307e5f succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 21:16:34.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6226" for this suite.
• [SLOW TEST:42.525 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for pods for Subdomain [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":7,"skipped":46,"failed":0}
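Note: the repeated "Unable to read" lines above are just the poll loop running before the headless service's subdomain records propagate; the probe converges at 21:16:34. What each dig invocation verifies can also be expressed in Go, run from a pod inside the cluster (the names below are the ones this run queried; resolution from outside the cluster will fail).

package main

import (
	"fmt"
	"net"
)

func main() {
	// The service and querier names under the headless service's subdomain,
	// taken from the probe script in the log.
	names := []string{
		"dns-querier-2.dns-test-service-2.dns-6226.svc.cluster.local",
		"dns-test-service-2.dns-6226.svc.cluster.local",
	}
	for _, n := range names {
		addrs, err := net.LookupHost(n)
		if err != nil {
			// Expected while endpoints are still warming up, as in the log.
			fmt.Println(n, "not yet resolvable:", err)
			continue
		}
		fmt.Println(n, "->", addrs)
	}
}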
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 21:16:34.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 25 21:16:34.506: INFO: Waiting up to 5m0s for pod "pod-959656e4-9be0-4d6d-8192-bb96d0f53fef" in namespace "emptydir-5610" to be "success or failure"
Jan 25 21:16:34.524: INFO: Pod "pod-959656e4-9be0-4d6d-8192-bb96d0f53fef": Phase="Pending", Reason="", readiness=false. Elapsed: 17.966207ms
Jan 25 21:16:36.533: INFO: Pod "pod-959656e4-9be0-4d6d-8192-bb96d0f53fef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026851615s
Jan 25 21:16:38.543: INFO: Pod "pod-959656e4-9be0-4d6d-8192-bb96d0f53fef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037600175s
Jan 25 21:16:40.553: INFO: Pod "pod-959656e4-9be0-4d6d-8192-bb96d0f53fef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047473412s
Jan 25 21:16:42.566: INFO: Pod "pod-959656e4-9be0-4d6d-8192-bb96d0f53fef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060589878s
Jan 25 21:16:44.578: INFO: Pod "pod-959656e4-9be0-4d6d-8192-bb96d0f53fef": Phase="Pending", Reason="", readiness=false. Elapsed: 10.072171594s
Jan 25 21:16:46.590: INFO: Pod "pod-959656e4-9be0-4d6d-8192-bb96d0f53fef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.083877299s
STEP: Saw pod success
Jan 25 21:16:46.590: INFO: Pod "pod-959656e4-9be0-4d6d-8192-bb96d0f53fef" satisfied condition "success or failure"
Jan 25 21:16:46.595: INFO: Trying to get logs from node jerma-node pod pod-959656e4-9be0-4d6d-8192-bb96d0f53fef container test-container:
STEP: delete the pod
Jan 25 21:16:46.696: INFO: Waiting for pod pod-959656e4-9be0-4d6d-8192-bb96d0f53fef to disappear
Jan 25 21:16:46.713: INFO: Pod pod-959656e4-9be0-4d6d-8192-bb96d0f53fef no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 21:16:46.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5610" for this suite.
• [SLOW TEST:12.411 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":76,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 21:16:46.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 25 21:16:46.964: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e62c8c5-43e7-4ae9-b468-bfcab65b77e9" in namespace "projected-4705" to be "success or failure"
Jan 25 21:16:46.993: INFO: Pod "downwardapi-volume-8e62c8c5-43e7-4ae9-b468-bfcab65b77e9": Phase="Pending", Reason="", readiness=false. Elapsed: 28.420273ms
Jan 25 21:16:49.006: INFO: Pod "downwardapi-volume-8e62c8c5-43e7-4ae9-b468-bfcab65b77e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041440333s
Jan 25 21:16:51.011: INFO: Pod "downwardapi-volume-8e62c8c5-43e7-4ae9-b468-bfcab65b77e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046707473s
Jan 25 21:16:53.017: INFO: Pod "downwardapi-volume-8e62c8c5-43e7-4ae9-b468-bfcab65b77e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052312764s
Jan 25 21:16:55.025: INFO: Pod "downwardapi-volume-8e62c8c5-43e7-4ae9-b468-bfcab65b77e9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06112417s
Jan 25 21:16:57.032: INFO: Pod "downwardapi-volume-8e62c8c5-43e7-4ae9-b468-bfcab65b77e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067774551s
STEP: Saw pod success
Jan 25 21:16:57.032: INFO: Pod "downwardapi-volume-8e62c8c5-43e7-4ae9-b468-bfcab65b77e9" satisfied condition "success or failure"
Jan 25 21:16:57.036: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-8e62c8c5-43e7-4ae9-b468-bfcab65b77e9 container client-container:
STEP: delete the pod
Jan 25 21:16:57.113: INFO: Waiting for pod downwardapi-volume-8e62c8c5-43e7-4ae9-b468-bfcab65b77e9 to disappear
Jan 25 21:16:57.120: INFO: Pod downwardapi-volume-8e62c8c5-43e7-4ae9-b468-bfcab65b77e9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 21:16:57.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4705" for this suite.
• [SLOW TEST:10.398 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":111,"failed":0}
SSS
------------------------------
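Note: the projected downwardAPI case mounts a volume whose file contents come from the container's own resource limits. A sketch of the volume source that exposes limits.cpu as a file; the volume name and file path are assumptions in the style the suite uses, while "client-container" is the container name shown in the log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// cpuLimitVolume exposes the container's CPU limit as the file "cpu_limit"
// via a projected downwardAPI source, the mechanism this spec exercises.
func cpuLimitVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo", // assumed volume name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit", // assumed file name
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				}},
			},
		},
	}
}

func main() { fmt.Printf("%+v\n", cpuLimitVolume()) }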
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 21:16:57.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-6156bd92-f9cf-44a7-8f36-de9514c565c7
STEP: Creating a pod to test consume configMaps
Jan 25 21:16:57.397: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e23153af-37d3-4c82-bc4a-f6fc1b342a36" in namespace "projected-4202" to be "success or failure"
Jan 25 21:16:57.422: INFO: Pod "pod-projected-configmaps-e23153af-37d3-4c82-bc4a-f6fc1b342a36": Phase="Pending", Reason="", readiness=false. Elapsed: 25.136433ms
Jan 25 21:16:59.439: INFO: Pod "pod-projected-configmaps-e23153af-37d3-4c82-bc4a-f6fc1b342a36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042354952s
Jan 25 21:17:01.445: INFO: Pod "pod-projected-configmaps-e23153af-37d3-4c82-bc4a-f6fc1b342a36": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048525993s
Jan 25 21:17:03.453: INFO: Pod "pod-projected-configmaps-e23153af-37d3-4c82-bc4a-f6fc1b342a36": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055819777s
Jan 25 21:17:05.461: INFO: Pod "pod-projected-configmaps-e23153af-37d3-4c82-bc4a-f6fc1b342a36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.064653666s
STEP: Saw pod success
Jan 25 21:17:05.462: INFO: Pod "pod-projected-configmaps-e23153af-37d3-4c82-bc4a-f6fc1b342a36" satisfied condition "success or failure"
Jan 25 21:17:05.465: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-e23153af-37d3-4c82-bc4a-f6fc1b342a36 container projected-configmap-volume-test:
STEP: delete the pod
Jan 25 21:17:05.527: INFO: Waiting for pod pod-projected-configmaps-e23153af-37d3-4c82-bc4a-f6fc1b342a36 to disappear
Jan 25 21:17:05.538: INFO: Pod pod-projected-configmaps-e23153af-37d3-4c82-bc4a-f6fc1b342a36 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 21:17:05.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4202" for this suite.
• [SLOW TEST:8.517 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":114,"failed":0}
SSSSS
------------------------------
• [SLOW TEST:8.517 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":114,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:17:05.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-a3171f8c-4cc5-45cc-bd3a-1cd73f5be5af STEP: Creating a pod to test consume configMaps Jan 25 21:17:05.910: INFO: Waiting up to 5m0s for pod "pod-configmaps-2b69ebab-2ea3-4a44-803b-de49553b1691" in namespace "configmap-4494" to be "success or failure" Jan 25 21:17:05.962: INFO: Pod "pod-configmaps-2b69ebab-2ea3-4a44-803b-de49553b1691": Phase="Pending", Reason="", readiness=false. Elapsed: 51.848698ms Jan 25 21:17:07.970: INFO: Pod "pod-configmaps-2b69ebab-2ea3-4a44-803b-de49553b1691": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059330374s Jan 25 21:17:09.982: INFO: Pod "pod-configmaps-2b69ebab-2ea3-4a44-803b-de49553b1691": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0718679s Jan 25 21:17:12.030: INFO: Pod "pod-configmaps-2b69ebab-2ea3-4a44-803b-de49553b1691": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11955982s Jan 25 21:17:14.037: INFO: Pod "pod-configmaps-2b69ebab-2ea3-4a44-803b-de49553b1691": Phase="Pending", Reason="", readiness=false. Elapsed: 8.126340011s Jan 25 21:17:16.046: INFO: Pod "pod-configmaps-2b69ebab-2ea3-4a44-803b-de49553b1691": Phase="Pending", Reason="", readiness=false. Elapsed: 10.135579292s Jan 25 21:17:18.051: INFO: Pod "pod-configmaps-2b69ebab-2ea3-4a44-803b-de49553b1691": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.140691999s STEP: Saw pod success Jan 25 21:17:18.051: INFO: Pod "pod-configmaps-2b69ebab-2ea3-4a44-803b-de49553b1691" satisfied condition "success or failure" Jan 25 21:17:18.055: INFO: Trying to get logs from node jerma-node pod pod-configmaps-2b69ebab-2ea3-4a44-803b-de49553b1691 container configmap-volume-test: STEP: delete the pod Jan 25 21:17:18.090: INFO: Waiting for pod pod-configmaps-2b69ebab-2ea3-4a44-803b-de49553b1691 to disappear Jan 25 21:17:18.111: INFO: Pod pod-configmaps-2b69ebab-2ea3-4a44-803b-de49553b1691 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:17:18.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4494" for this suite. • [SLOW TEST:12.553 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":119,"failed":0} [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:17:18.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Jan 25 21:17:18.923: INFO: created pod pod-service-account-defaultsa Jan 25 21:17:18.923: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jan 25 21:17:18.960: INFO: created pod pod-service-account-mountsa Jan 25 21:17:18.961: INFO: pod pod-service-account-mountsa service account token volume mount: true Jan 25 21:17:19.029: INFO: created pod pod-service-account-nomountsa Jan 25 21:17:19.030: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jan 25 21:17:19.057: INFO: created pod pod-service-account-defaultsa-mountspec Jan 25 21:17:19.057: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jan 25 21:17:19.082: INFO: created pod pod-service-account-mountsa-mountspec Jan 25 21:17:19.083: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jan 25 21:17:19.108: INFO: created pod pod-service-account-nomountsa-mountspec Jan 25 21:17:19.108: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jan 25 21:17:19.125: INFO: created pod pod-service-account-defaultsa-nomountspec Jan 25 21:17:19.125: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jan 25 21:17:19.217: INFO: created pod pod-service-account-mountsa-nomountspec Jan 25 21:17:19.217: INFO: pod 
pod-service-account-mountsa-nomountspec service account token volume mount: false Jan 25 21:17:19.244: INFO: created pod pod-service-account-nomountsa-nomountspec Jan 25 21:17:19.244: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:17:19.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1742" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":12,"skipped":119,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:17:20.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Jan 25 21:17:49.243: INFO: Pod pod-hostip-d24004e5-cab0-4bd4-bee8-31a4243dae4b has hostIP: 10.96.2.250 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:17:49.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2943" for this suite. 
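------------------------------
Editor's note: the "should get a host IP" entry above creates a pod and then reads status.hostIP once the kubelet has reported it (10.96.2.250 in this run). A rough client-go sketch of that check follows; the namespace and pod name are hypothetical, the kubeconfig path matches the one in the log, and the Get signature shown is the recent one taking a context (the v1.17-era client used by this run omitted the context argument).

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll an already-created pod until status.hostIP is populated.
	for {
		pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "my-pod", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if pod.Status.HostIP != "" {
			fmt.Println("hostIP:", pod.Status.HostIP)
			return
		}
		time.Sleep(2 * time.Second)
	}
}
------------------------------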
• [SLOW TEST:28.334 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":125,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:17:49.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 25 21:17:50.163: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 25 21:17:52.180: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583870, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583870, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583870, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583870, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:17:54.187: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583870, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583870, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583870, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583870, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 
21:17:56.187: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583870, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583870, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583870, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583870, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:17:58.251: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583870, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583870, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583870, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583870, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 25 21:18:01.256: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:18:01.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3103" for this suite. STEP: Destroying namespace "webhook-3103-markers" for this suite. 
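------------------------------
Editor's note: the webhook entry above registers a validating webhook whose backend is deliberately unreachable and whose failurePolicy is Fail, so every matched request (here, creating a configmap) must be rejected. A minimal sketch of such a configuration follows, using the admissionregistration/v1 API; the object names, service reference, and path are hypothetical.

package main

import (
	"encoding/json"
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	failClosed := admissionregistrationv1.Fail
	noSideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/unreachable"
	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "fail-closed-example"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "fail-closed.example.com",
			// Points at a service/path that cannot answer, so with
			// failurePolicy=Fail every matched request is rejected.
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-markers", // hypothetical
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			FailurePolicy:           &failClosed,
			SideEffects:             &noSideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}

Fail-closed is the safe-by-default choice for policy-enforcing webhooks; the trade-off, visible in this test, is that an unavailable webhook blocks the operations it matches.
------------------------------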
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.335 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":14,"skipped":126,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:18:01.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 25 21:18:02.260: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 25 21:18:04.272: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583882, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583882, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583882, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583882, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:18:06.280: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583882, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583882, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583882, loc:(*time.Location)(0x7d100a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583882, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:18:08.278: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583882, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583882, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583882, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583882, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:18:10.279: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583882, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583882, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583882, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583882, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:18:12.280: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583882, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583882, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583882, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715583882, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 25 21:18:15.311: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the 
AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Jan 25 21:18:15.350: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:18:15.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-636" for this suite. STEP: Destroying namespace "webhook-636-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.108 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":15,"skipped":126,"failed":0} [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:18:15.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 25 21:18:15.929: INFO: Waiting up to 5m0s for pod "pod-656c8a3c-5d4c-40ea-a565-aa8176282fe4" in namespace "emptydir-6441" to be "success or failure" Jan 25 21:18:15.946: INFO: Pod "pod-656c8a3c-5d4c-40ea-a565-aa8176282fe4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.428811ms Jan 25 21:18:17.954: INFO: Pod "pod-656c8a3c-5d4c-40ea-a565-aa8176282fe4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024578908s Jan 25 21:18:19.961: INFO: Pod "pod-656c8a3c-5d4c-40ea-a565-aa8176282fe4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03171089s Jan 25 21:18:21.966: INFO: Pod "pod-656c8a3c-5d4c-40ea-a565-aa8176282fe4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036234857s Jan 25 21:18:24.766: INFO: Pod "pod-656c8a3c-5d4c-40ea-a565-aa8176282fe4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.836434016s Jan 25 21:18:26.771: INFO: Pod "pod-656c8a3c-5d4c-40ea-a565-aa8176282fe4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.841221403s Jan 25 21:18:28.778: INFO: Pod "pod-656c8a3c-5d4c-40ea-a565-aa8176282fe4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.848396147s Jan 25 21:18:30.783: INFO: Pod "pod-656c8a3c-5d4c-40ea-a565-aa8176282fe4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.853565942s STEP: Saw pod success Jan 25 21:18:30.783: INFO: Pod "pod-656c8a3c-5d4c-40ea-a565-aa8176282fe4" satisfied condition "success or failure" Jan 25 21:18:30.787: INFO: Trying to get logs from node jerma-node pod pod-656c8a3c-5d4c-40ea-a565-aa8176282fe4 container test-container: STEP: delete the pod Jan 25 21:18:30.926: INFO: Waiting for pod pod-656c8a3c-5d4c-40ea-a565-aa8176282fe4 to disappear Jan 25 21:18:30.943: INFO: Pod pod-656c8a3c-5d4c-40ea-a565-aa8176282fe4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:18:30.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6441" for this suite. • [SLOW TEST:15.247 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":126,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:18:30.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 25 21:18:31.223: INFO: Waiting up to 5m0s for pod "pod-5aeade0f-897d-4454-b7e3-93e3e9aed16c" in namespace "emptydir-711" to be "success or failure" Jan 25 21:18:31.233: INFO: Pod "pod-5aeade0f-897d-4454-b7e3-93e3e9aed16c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.987213ms Jan 25 21:18:33.240: INFO: Pod "pod-5aeade0f-897d-4454-b7e3-93e3e9aed16c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016968553s Jan 25 21:18:35.247: INFO: Pod "pod-5aeade0f-897d-4454-b7e3-93e3e9aed16c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023134541s Jan 25 21:18:37.253: INFO: Pod "pod-5aeade0f-897d-4454-b7e3-93e3e9aed16c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029636308s Jan 25 21:18:39.265: INFO: Pod "pod-5aeade0f-897d-4454-b7e3-93e3e9aed16c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.041355818s STEP: Saw pod success Jan 25 21:18:39.265: INFO: Pod "pod-5aeade0f-897d-4454-b7e3-93e3e9aed16c" satisfied condition "success or failure" Jan 25 21:18:39.275: INFO: Trying to get logs from node jerma-node pod pod-5aeade0f-897d-4454-b7e3-93e3e9aed16c container test-container: STEP: delete the pod Jan 25 21:18:39.348: INFO: Waiting for pod pod-5aeade0f-897d-4454-b7e3-93e3e9aed16c to disappear Jan 25 21:18:39.371: INFO: Pod pod-5aeade0f-897d-4454-b7e3-93e3e9aed16c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:18:39.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-711" for this suite. • [SLOW TEST:8.428 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":224,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:18:39.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jan 25 21:18:39.501: INFO: Waiting up to 5m0s for pod "downward-api-ae641936-f20b-41ce-b797-d09156da0036" in namespace "downward-api-2560" to be "success or failure" Jan 25 21:18:39.529: INFO: Pod "downward-api-ae641936-f20b-41ce-b797-d09156da0036": Phase="Pending", Reason="", readiness=false. Elapsed: 27.966529ms Jan 25 21:18:41.537: INFO: Pod "downward-api-ae641936-f20b-41ce-b797-d09156da0036": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035691028s Jan 25 21:18:43.546: INFO: Pod "downward-api-ae641936-f20b-41ce-b797-d09156da0036": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044295937s Jan 25 21:18:45.555: INFO: Pod "downward-api-ae641936-f20b-41ce-b797-d09156da0036": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05368485s Jan 25 21:18:47.566: INFO: Pod "downward-api-ae641936-f20b-41ce-b797-d09156da0036": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064261493s Jan 25 21:18:49.574: INFO: Pod "downward-api-ae641936-f20b-41ce-b797-d09156da0036": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.072218828s STEP: Saw pod success Jan 25 21:18:49.574: INFO: Pod "downward-api-ae641936-f20b-41ce-b797-d09156da0036" satisfied condition "success or failure" Jan 25 21:18:49.578: INFO: Trying to get logs from node jerma-node pod downward-api-ae641936-f20b-41ce-b797-d09156da0036 container dapi-container: STEP: delete the pod Jan 25 21:18:49.730: INFO: Waiting for pod downward-api-ae641936-f20b-41ce-b797-d09156da0036 to disappear Jan 25 21:18:49.763: INFO: Pod downward-api-ae641936-f20b-41ce-b797-d09156da0036 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:18:49.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2560" for this suite. • [SLOW TEST:10.424 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":228,"failed":0} SSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:18:49.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Jan 25 21:18:56.910: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9391 pod-service-account-c1d40c73-ed3b-4d87-8bdb-90da91d283ae -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jan 25 21:18:58.986: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9391 pod-service-account-c1d40c73-ed3b-4d87-8bdb-90da91d283ae -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jan 25 21:18:59.313: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9391 pod-service-account-c1d40c73-ed3b-4d87-8bdb-90da91d283ae -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:18:59.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9391" for this suite. 
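------------------------------
Editor's note: the ServiceAccounts entry above verifies the token mount by running `cat` over three well-known files via kubectl exec. Any process inside a pod that has not opted out of automount can read the same files directly; a tiny stand-alone Go sketch follows (runnable anywhere — outside a pod it simply reports the files as missing).

package main

import (
	"fmt"
	"os"
)

func main() {
	// Every pod that does not opt out gets these three files from the
	// kubelet; the test above read each of them via kubectl exec.
	base := "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, f := range []string{"token", "ca.crt", "namespace"} {
		data, err := os.ReadFile(base + "/" + f)
		if err != nil {
			fmt.Println(f, "not mounted:", err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", f, len(data))
	}
}

These are the same files client libraries consume when building an in-cluster configuration.
------------------------------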
• [SLOW TEST:9.824 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":19,"skipped":234,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:18:59.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1768 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 25 21:18:59.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1231' Jan 25 21:18:59.816: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 25 21:18:59.816: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1773 Jan 25 21:18:59.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-1231' Jan 25 21:18:59.942: INFO: stderr: "" Jan 25 21:18:59.942: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:18:59.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1231" for this suite. 
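------------------------------
Editor's note: the kubectl entry above uses the (deprecated, per its own stderr) `--generator=job/v1` form of `kubectl run`. The Job it produces is roughly equivalent to the batch/v1 object sketched below; the image is the one named in the log, while the container name and the omitted fields are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// --restart=OnFailure is what selected the job
					// generator in the first place.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "e2e-test-httpd-job",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(job, "", "  ")
	fmt.Println(string(out))
}

The deprecation notice in the log points at `kubectl create job` as the modern replacement for generator-based `kubectl run`.
------------------------------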
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":20,"skipped":237,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:18:59.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-033d7018-5f79-4456-832e-23206d9c4a02 STEP: Creating secret with name s-test-opt-upd-7f987226-b375-4387-9b40-f11a926246d9 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-033d7018-5f79-4456-832e-23206d9c4a02 STEP: Updating secret s-test-opt-upd-7f987226-b375-4387-9b40-f11a926246d9 STEP: Creating secret with name s-test-opt-create-1b5b6ea0-26d8-42fc-82af-61681a32bc0a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:19:16.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3342" for this suite. • [SLOW TEST:16.339 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":261,"failed":0} SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:19:16.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-8778 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 25 21:19:16.523: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 25 21:19:56.718: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] 
Namespace:pod-network-test-8778 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 21:19:56.718: INFO: >>> kubeConfig: /root/.kube/config I0125 21:19:56.779273 8 log.go:172] (0xc002c30370) (0xc001fe01e0) Create stream I0125 21:19:56.779379 8 log.go:172] (0xc002c30370) (0xc001fe01e0) Stream added, broadcasting: 1 I0125 21:19:56.783793 8 log.go:172] (0xc002c30370) Reply frame received for 1 I0125 21:19:56.783846 8 log.go:172] (0xc002c30370) (0xc001b97900) Create stream I0125 21:19:56.783858 8 log.go:172] (0xc002c30370) (0xc001b97900) Stream added, broadcasting: 3 I0125 21:19:56.787871 8 log.go:172] (0xc002c30370) Reply frame received for 3 I0125 21:19:56.787916 8 log.go:172] (0xc002c30370) (0xc001b0dd60) Create stream I0125 21:19:56.787931 8 log.go:172] (0xc002c30370) (0xc001b0dd60) Stream added, broadcasting: 5 I0125 21:19:56.790320 8 log.go:172] (0xc002c30370) Reply frame received for 5 I0125 21:19:57.898213 8 log.go:172] (0xc002c30370) Data frame received for 3 I0125 21:19:57.898421 8 log.go:172] (0xc001b97900) (3) Data frame handling I0125 21:19:57.898480 8 log.go:172] (0xc001b97900) (3) Data frame sent I0125 21:19:58.013218 8 log.go:172] (0xc002c30370) (0xc001b97900) Stream removed, broadcasting: 3 I0125 21:19:58.013423 8 log.go:172] (0xc002c30370) Data frame received for 1 I0125 21:19:58.013474 8 log.go:172] (0xc001fe01e0) (1) Data frame handling I0125 21:19:58.013526 8 log.go:172] (0xc002c30370) (0xc001b0dd60) Stream removed, broadcasting: 5 I0125 21:19:58.013601 8 log.go:172] (0xc001fe01e0) (1) Data frame sent I0125 21:19:58.013640 8 log.go:172] (0xc002c30370) (0xc001fe01e0) Stream removed, broadcasting: 1 I0125 21:19:58.013693 8 log.go:172] (0xc002c30370) Go away received I0125 21:19:58.014051 8 log.go:172] (0xc002c30370) (0xc001fe01e0) Stream removed, broadcasting: 1 I0125 21:19:58.014079 8 log.go:172] (0xc002c30370) (0xc001b97900) Stream removed, broadcasting: 3 I0125 21:19:58.014093 8 log.go:172] (0xc002c30370) (0xc001b0dd60) Stream removed, broadcasting: 5 Jan 25 21:19:58.014: INFO: Found all expected endpoints: [netserver-0] Jan 25 21:19:58.020: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8778 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 21:19:58.020: INFO: >>> kubeConfig: /root/.kube/config I0125 21:19:58.069776 8 log.go:172] (0xc002b04370) (0xc001beb720) Create stream I0125 21:19:58.069899 8 log.go:172] (0xc002b04370) (0xc001beb720) Stream added, broadcasting: 1 I0125 21:19:58.074318 8 log.go:172] (0xc002b04370) Reply frame received for 1 I0125 21:19:58.074358 8 log.go:172] (0xc002b04370) (0xc001b97b80) Create stream I0125 21:19:58.074369 8 log.go:172] (0xc002b04370) (0xc001b97b80) Stream added, broadcasting: 3 I0125 21:19:58.075875 8 log.go:172] (0xc002b04370) Reply frame received for 3 I0125 21:19:58.075897 8 log.go:172] (0xc002b04370) (0xc001beb7c0) Create stream I0125 21:19:58.075906 8 log.go:172] (0xc002b04370) (0xc001beb7c0) Stream added, broadcasting: 5 I0125 21:19:58.078287 8 log.go:172] (0xc002b04370) Reply frame received for 5 I0125 21:19:59.183465 8 log.go:172] (0xc002b04370) Data frame received for 3 I0125 21:19:59.183762 8 log.go:172] (0xc001b97b80) (3) Data frame handling I0125 21:19:59.183845 8 log.go:172] (0xc001b97b80) (3) Data frame sent I0125 21:19:59.266683 8 log.go:172] (0xc002b04370) Data frame received 
for 1 I0125 21:19:59.266818 8 log.go:172] (0xc001beb720) (1) Data frame handling I0125 21:19:59.266843 8 log.go:172] (0xc001beb720) (1) Data frame sent I0125 21:19:59.267106 8 log.go:172] (0xc002b04370) (0xc001beb720) Stream removed, broadcasting: 1 I0125 21:19:59.267764 8 log.go:172] (0xc002b04370) (0xc001b97b80) Stream removed, broadcasting: 3 I0125 21:19:59.268155 8 log.go:172] (0xc002b04370) (0xc001beb7c0) Stream removed, broadcasting: 5 I0125 21:19:59.268326 8 log.go:172] (0xc002b04370) (0xc001beb720) Stream removed, broadcasting: 1 I0125 21:19:59.268353 8 log.go:172] (0xc002b04370) (0xc001b97b80) Stream removed, broadcasting: 3 I0125 21:19:59.268374 8 log.go:172] (0xc002b04370) (0xc001beb7c0) Stream removed, broadcasting: 5 I0125 21:19:59.268874 8 log.go:172] (0xc002b04370) Go away received Jan 25 21:19:59.269: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:19:59.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8778" for this suite. • [SLOW TEST:42.998 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":265,"failed":0} [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:19:59.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-rgbv STEP: Creating a pod to test atomic-volume-subpath Jan 25 21:19:59.483: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-rgbv" in namespace "subpath-8148" to be "success or failure" Jan 25 21:19:59.497: INFO: Pod "pod-subpath-test-downwardapi-rgbv": Phase="Pending", Reason="", readiness=false. Elapsed: 13.345023ms Jan 25 21:20:01.505: INFO: Pod "pod-subpath-test-downwardapi-rgbv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021823255s Jan 25 21:20:03.514: INFO: Pod "pod-subpath-test-downwardapi-rgbv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030518261s Jan 25 21:20:05.757: INFO: Pod "pod-subpath-test-downwardapi-rgbv": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.273038426s Jan 25 21:20:07.896: INFO: Pod "pod-subpath-test-downwardapi-rgbv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.412512896s Jan 25 21:20:09.905: INFO: Pod "pod-subpath-test-downwardapi-rgbv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.421671277s Jan 25 21:20:11.916: INFO: Pod "pod-subpath-test-downwardapi-rgbv": Phase="Running", Reason="", readiness=true. Elapsed: 12.432360591s Jan 25 21:20:13.927: INFO: Pod "pod-subpath-test-downwardapi-rgbv": Phase="Running", Reason="", readiness=true. Elapsed: 14.443674331s Jan 25 21:20:15.939: INFO: Pod "pod-subpath-test-downwardapi-rgbv": Phase="Running", Reason="", readiness=true. Elapsed: 16.455060786s Jan 25 21:20:17.946: INFO: Pod "pod-subpath-test-downwardapi-rgbv": Phase="Running", Reason="", readiness=true. Elapsed: 18.462318027s Jan 25 21:20:19.952: INFO: Pod "pod-subpath-test-downwardapi-rgbv": Phase="Running", Reason="", readiness=true. Elapsed: 20.468047209s Jan 25 21:20:21.963: INFO: Pod "pod-subpath-test-downwardapi-rgbv": Phase="Running", Reason="", readiness=true. Elapsed: 22.479289468s Jan 25 21:20:23.971: INFO: Pod "pod-subpath-test-downwardapi-rgbv": Phase="Running", Reason="", readiness=true. Elapsed: 24.487694102s Jan 25 21:20:25.979: INFO: Pod "pod-subpath-test-downwardapi-rgbv": Phase="Running", Reason="", readiness=true. Elapsed: 26.495599757s Jan 25 21:20:27.984: INFO: Pod "pod-subpath-test-downwardapi-rgbv": Phase="Running", Reason="", readiness=true. Elapsed: 28.500414402s Jan 25 21:20:29.990: INFO: Pod "pod-subpath-test-downwardapi-rgbv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.506246627s STEP: Saw pod success Jan 25 21:20:29.990: INFO: Pod "pod-subpath-test-downwardapi-rgbv" satisfied condition "success or failure" Jan 25 21:20:29.993: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-rgbv container test-container-subpath-downwardapi-rgbv: STEP: delete the pod Jan 25 21:20:30.073: INFO: Waiting for pod pod-subpath-test-downwardapi-rgbv to disappear Jan 25 21:20:30.096: INFO: Pod pod-subpath-test-downwardapi-rgbv no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-rgbv Jan 25 21:20:30.096: INFO: Deleting pod "pod-subpath-test-downwardapi-rgbv" in namespace "subpath-8148" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:20:30.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8148" for this suite. 
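------------------------------
Editor's note: the Subpath entry above mounts a downwardAPI volume into a container via a subPath, which is exactly the case where atomic-writer volumes historically misbehaved across updates. A hedged sketch of that pod shape follows; image, command, and file layout are hypothetical, but the subPath-over-downwardAPI structure matches what the test exercises.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-downwardapi-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container-subpath",
				Image: "docker.io/library/busybox:1.29", // hypothetical
				// The volume file lives at subpath-dir/podname, so through
				// the subPath mount it appears at /test-volume/podname.
				Command: []string{"sh", "-c", "cat /test-volume/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "downward",
					MountPath: "/test-volume",
					SubPath:   "subpath-dir",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "downward",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "subpath-dir/podname",
							FieldRef: &corev1.ObjectFieldSelector{
								FieldPath: "metadata.name",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------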
• [SLOW TEST:30.812 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":23,"skipped":265,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:20:30.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1576 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 25 21:20:30.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9657' Jan 25 21:20:30.326: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 25 21:20:30.326: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582 Jan 25 21:20:32.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-9657' Jan 25 21:20:32.596: INFO: stderr: "" Jan 25 21:20:32.597: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:20:32.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9657" for this suite. 
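------------------------------
Editor's note: here the default `kubectl run` generator (`deployment/apps.v1`, again flagged as deprecated in stderr) creates a Deployment. The rough apps/v1 equivalent is sketched below; the `run: <name>` label convention is what that generator applied, and everything not shown in the log is an assumption.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"run": "e2e-test-httpd-deployment"}
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-test-httpd-deployment",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(dep, "", "  ")
	fmt.Println(string(out))
}
------------------------------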
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":24,"skipped":280,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:20:32.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8589 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-8589 STEP: Creating statefulset with conflicting port in namespace statefulset-8589 STEP: Waiting until pod test-pod will start running in namespace statefulset-8589 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-8589 Jan 25 21:20:42.855: INFO: Observed stateful pod in namespace: statefulset-8589, name: ss-0, uid: 069c93de-0ac9-4d01-bd8e-12d820124210, status phase: Pending. Waiting for statefulset controller to delete. Jan 25 21:20:43.058: INFO: Observed stateful pod in namespace: statefulset-8589, name: ss-0, uid: 069c93de-0ac9-4d01-bd8e-12d820124210, status phase: Failed. Waiting for statefulset controller to delete. Jan 25 21:20:43.115: INFO: Observed stateful pod in namespace: statefulset-8589, name: ss-0, uid: 069c93de-0ac9-4d01-bd8e-12d820124210, status phase: Failed. Waiting for statefulset controller to delete. Jan 25 21:20:43.141: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8589 STEP: Removing pod with conflicting port in namespace statefulset-8589 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-8589 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 25 21:20:53.579: INFO: Deleting all statefulset in ns statefulset-8589 Jan 25 21:20:53.584: INFO: Scaling statefulset ss to 0 Jan 25 21:21:03.620: INFO: Waiting for statefulset status.replicas updated to 0 Jan 25 21:21:03.627: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:21:03.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8589" for this suite. 
• [SLOW TEST:31.070 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":25,"skipped":296,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:21:03.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-86cccaef-841f-478b-a5f2-c4efd9c9fe04 in namespace container-probe-5736 Jan 25 21:21:11.844: INFO: Started pod busybox-86cccaef-841f-478b-a5f2-c4efd9c9fe04 in namespace container-probe-5736 STEP: checking the pod's current state and verifying that restartCount is present Jan 25 21:21:11.848: INFO: Initial restart count of pod busybox-86cccaef-841f-478b-a5f2-c4efd9c9fe04 is 0 Jan 25 21:22:08.149: INFO: Restart count of pod container-probe-5736/busybox-86cccaef-841f-478b-a5f2-c4efd9c9fe04 is now 1 (56.300908245s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:22:08.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5736" for this suite. 
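The restart observed above (count 0 -> 1 after roughly 56s) follows the standard exec-probe pattern: the container creates /tmp/health, deletes it after a delay, and the `cat /tmp/health` probe starts failing, so the kubelet restarts the container. A sketch of such a pod, with illustrative timings rather than the test's exact ones:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-exec
  spec:
    containers:
    - name: busybox
      image: busybox
      args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]   # fails once the file is gone
        initialDelaySeconds: 5
        periodSeconds: 5
  EOF
  # the RESTARTS column increments once /tmp/health disappears
  kubectl get pod liveness-exec -w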
• [SLOW TEST:64.547 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with an exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":302,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:22:08.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Jan 25 21:22:08.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7118' Jan 25 21:22:09.003: INFO: stderr: "" Jan 25 21:22:09.003: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jan 25 21:22:10.042: INFO: Selector matched 1 pods for map[app:agnhost] Jan 25 21:22:10.042: INFO: Found 0 / 1 Jan 25 21:22:11.012: INFO: Selector matched 1 pods for map[app:agnhost] Jan 25 21:22:11.013: INFO: Found 0 / 1 Jan 25 21:22:12.088: INFO: Selector matched 1 pods for map[app:agnhost] Jan 25 21:22:12.088: INFO: Found 0 / 1 Jan 25 21:22:13.020: INFO: Selector matched 1 pods for map[app:agnhost] Jan 25 21:22:13.020: INFO: Found 0 / 1 Jan 25 21:22:14.012: INFO: Selector matched 1 pods for map[app:agnhost] Jan 25 21:22:14.012: INFO: Found 0 / 1 Jan 25 21:22:15.010: INFO: Selector matched 1 pods for map[app:agnhost] Jan 25 21:22:15.011: INFO: Found 0 / 1 Jan 25 21:22:16.015: INFO: Selector matched 1 pods for map[app:agnhost] Jan 25 21:22:16.016: INFO: Found 0 / 1 Jan 25 21:22:17.009: INFO: Selector matched 1 pods for map[app:agnhost] Jan 25 21:22:17.009: INFO: Found 0 / 1 Jan 25 21:22:18.008: INFO: Selector matched 1 pods for map[app:agnhost] Jan 25 21:22:18.008: INFO: Found 1 / 1 Jan 25 21:22:18.008: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jan 25 21:22:18.016: INFO: Selector matched 1 pods for map[app:agnhost] Jan 25 21:22:18.016: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 25 21:22:18.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-p5tfw --namespace=kubectl-7118 -p {"metadata":{"annotations":{"x":"y"}}}' Jan 25 21:22:18.174: INFO: stderr: "" Jan 25 21:22:18.174: INFO: stdout: "pod/agnhost-master-p5tfw patched\n" STEP: checking annotations Jan 25 21:22:18.179: INFO: Selector matched 1 pods for map[app:agnhost] Jan 25 21:22:18.179: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
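The patch just issued is a strategic-merge patch against a single pod. Shown standalone with a verification step (pod name and namespace taken from the log; the jsonpath key is the annotation the test adds):

  kubectl patch pod agnhost-master-p5tfw --namespace=kubectl-7118 \
    -p '{"metadata":{"annotations":{"x":"y"}}}'
  # confirm the annotation landed; prints: y
  kubectl get pod agnhost-master-p5tfw --namespace=kubectl-7118 \
    -o jsonpath='{.metadata.annotations.x}'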
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:22:18.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7118" for this suite. • [SLOW TEST:9.957 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":27,"skipped":308,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:22:18.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Jan 25 21:22:18.328: INFO: >>> kubeConfig: /root/.kube/config Jan 25 21:22:21.950: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:22:34.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6758" for this suite. 
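The two `>>> kubeConfig` lines inside the CRD test above correspond to the two CRDs being registered in different API groups; the assertion is that both schemas show up in the apiserver's aggregated OpenAPI document. A way to observe the same thing by hand, assuming illustrative group and resource names (the test generates random ones):

  # dump every path/definition published for a given group
  kubectl get --raw /openapi/v2 | grep -o 'foo\.example\.com[^"]*' | sort -u
  # kubectl explain is driven by the same published schema
  kubectl explain foos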
• [SLOW TEST:16.291 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":28,"skipped":315,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:22:34.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 25 21:22:34.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Jan 25 21:22:34.824: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-25T21:22:34Z generation:1 name:name1 resourceVersion:4323975 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:5ad731b3-dbd8-43e7-b49a-be6b9480d9cd] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jan 25 21:22:44.835: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-25T21:22:44Z generation:1 name:name2 resourceVersion:4324002 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:0ab9ee9b-f444-4634-b60c-320f2dc76801] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jan 25 21:22:54.844: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-25T21:22:34Z generation:2 name:name1 resourceVersion:4324028 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:5ad731b3-dbd8-43e7-b49a-be6b9480d9cd] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jan 25 21:23:04.859: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-25T21:22:44Z generation:2 name:name2 resourceVersion:4324052 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:0ab9ee9b-f444-4634-b60c-320f2dc76801] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jan 25 21:23:14.887: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-25T21:22:34Z generation:2 name:name1 resourceVersion:4324076 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:5ad731b3-dbd8-43e7-b49a-be6b9480d9cd] 
num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jan 25 21:23:24.957: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-25T21:22:44Z generation:2 name:name2 resourceVersion:4324098 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:0ab9ee9b-f444-4634-b60c-320f2dc76801] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:23:35.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-2709" for this suite. • [SLOW TEST:61.020 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":29,"skipped":321,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:23:35.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6902.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6902.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6902.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6902.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 25 21:23:45.797: INFO: DNS probes using dns-test-494873ff-2f3a-4a6a-a0b9-1ef72281fd90 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6902.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6902.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6902.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6902.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: 
submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 25 21:23:59.972: INFO: File wheezy_udp@dns-test-service-3.dns-6902.svc.cluster.local from pod dns-6902/dns-test-7aa85632-0d35-4b54-8a8c-e84f236fda4c contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 25 21:23:59.976: INFO: File jessie_udp@dns-test-service-3.dns-6902.svc.cluster.local from pod dns-6902/dns-test-7aa85632-0d35-4b54-8a8c-e84f236fda4c contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 25 21:23:59.976: INFO: Lookups using dns-6902/dns-test-7aa85632-0d35-4b54-8a8c-e84f236fda4c failed for: [wheezy_udp@dns-test-service-3.dns-6902.svc.cluster.local jessie_udp@dns-test-service-3.dns-6902.svc.cluster.local] Jan 25 21:24:04.985: INFO: File wheezy_udp@dns-test-service-3.dns-6902.svc.cluster.local from pod dns-6902/dns-test-7aa85632-0d35-4b54-8a8c-e84f236fda4c contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 25 21:24:04.990: INFO: File jessie_udp@dns-test-service-3.dns-6902.svc.cluster.local from pod dns-6902/dns-test-7aa85632-0d35-4b54-8a8c-e84f236fda4c contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 25 21:24:04.990: INFO: Lookups using dns-6902/dns-test-7aa85632-0d35-4b54-8a8c-e84f236fda4c failed for: [wheezy_udp@dns-test-service-3.dns-6902.svc.cluster.local jessie_udp@dns-test-service-3.dns-6902.svc.cluster.local] Jan 25 21:24:09.987: INFO: File wheezy_udp@dns-test-service-3.dns-6902.svc.cluster.local from pod dns-6902/dns-test-7aa85632-0d35-4b54-8a8c-e84f236fda4c contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 25 21:24:09.994: INFO: File jessie_udp@dns-test-service-3.dns-6902.svc.cluster.local from pod dns-6902/dns-test-7aa85632-0d35-4b54-8a8c-e84f236fda4c contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 25 21:24:09.994: INFO: Lookups using dns-6902/dns-test-7aa85632-0d35-4b54-8a8c-e84f236fda4c failed for: [wheezy_udp@dns-test-service-3.dns-6902.svc.cluster.local jessie_udp@dns-test-service-3.dns-6902.svc.cluster.local] Jan 25 21:24:14.991: INFO: DNS probes using dns-test-7aa85632-0d35-4b54-8a8c-e84f236fda4c succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6902.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6902.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6902.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6902.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 25 21:24:27.242: INFO: DNS probes using dns-test-daedcb25-177b-4a9c-abe0-9f199916017e succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:24:27.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6902" for this suite. 
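The three probe rounds above track three states of one service: ExternalName pointing at foo.example.com (a CNAME), ExternalName repointed at bar.example.com (the transient failures are the probes catching the stale CNAME), and finally type ClusterIP (an A record). A sketch of driving those transitions, with the service name and namespace from the log; the port added in the last step is illustrative, since a ClusterIP service needs at least one:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: dns-test-service-3
    namespace: dns-6902
  spec:
    type: ExternalName
    externalName: foo.example.com
  EOF
  # repoint the CNAME target
  kubectl patch service dns-test-service-3 -n dns-6902 \
    -p '{"spec":{"externalName":"bar.example.com"}}'
  # convert to ClusterIP so lookups return an A record instead
  kubectl patch service dns-test-service-3 -n dns-6902 \
    -p '{"spec":{"type":"ClusterIP","externalName":null,"ports":[{"port":80}]}}'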
• [SLOW TEST:52.019 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":30,"skipped":333,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:24:27.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 25 21:24:27.695: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4310d059-7fdf-4e7d-9528-5a5109de5bd3" in namespace "projected-9622" to be "success or failure" Jan 25 21:24:27.702: INFO: Pod "downwardapi-volume-4310d059-7fdf-4e7d-9528-5a5109de5bd3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.120685ms Jan 25 21:24:29.738: INFO: Pod "downwardapi-volume-4310d059-7fdf-4e7d-9528-5a5109de5bd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04350463s Jan 25 21:24:31.746: INFO: Pod "downwardapi-volume-4310d059-7fdf-4e7d-9528-5a5109de5bd3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051495079s Jan 25 21:24:33.762: INFO: Pod "downwardapi-volume-4310d059-7fdf-4e7d-9528-5a5109de5bd3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066846583s Jan 25 21:24:35.771: INFO: Pod "downwardapi-volume-4310d059-7fdf-4e7d-9528-5a5109de5bd3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075969543s Jan 25 21:24:37.783: INFO: Pod "downwardapi-volume-4310d059-7fdf-4e7d-9528-5a5109de5bd3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.088047413s Jan 25 21:24:39.792: INFO: Pod "downwardapi-volume-4310d059-7fdf-4e7d-9528-5a5109de5bd3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.097210569s STEP: Saw pod success Jan 25 21:24:39.792: INFO: Pod "downwardapi-volume-4310d059-7fdf-4e7d-9528-5a5109de5bd3" satisfied condition "success or failure" Jan 25 21:24:39.797: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-4310d059-7fdf-4e7d-9528-5a5109de5bd3 container client-container: STEP: delete the pod Jan 25 21:24:39.946: INFO: Waiting for pod downwardapi-volume-4310d059-7fdf-4e7d-9528-5a5109de5bd3 to disappear Jan 25 21:24:39.985: INFO: Pod downwardapi-volume-4310d059-7fdf-4e7d-9528-5a5109de5bd3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:24:39.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9622" for this suite. • [SLOW TEST:12.475 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":364,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:24:39.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 25 21:24:40.112: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:24:40.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1190" for this suite. 
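Getting/updating/patching a status sub-resource presupposes that the CRD opted in: with spec.subresources.status set, /status becomes its own endpoint, and writes to it ignore everything outside .status. A minimal sketch of such a CRD, borrowing the group and kind from the watch test earlier in this log purely for illustration:

  kubectl apply -f - <<'EOF'
  apiVersion: apiextensions.k8s.io/v1beta1
  kind: CustomResourceDefinition
  metadata:
    name: noxus.mygroup.example.com
  spec:
    group: mygroup.example.com
    version: v1beta1
    scope: Namespaced
    names:
      plural: noxus
      singular: noxu
      kind: WishIHadChosenNoxu
    subresources:
      status: {}   # exposes the /status endpoint this test exercises
  EOF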
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":32,"skipped":371,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:24:40.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Jan 25 21:24:41.163: INFO: Waiting up to 5m0s for pod "client-containers-5b4e13f8-567a-4ade-b115-288f0e62531c" in namespace "containers-5338" to be "success or failure" Jan 25 21:24:41.180: INFO: Pod "client-containers-5b4e13f8-567a-4ade-b115-288f0e62531c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.724998ms Jan 25 21:24:43.185: INFO: Pod "client-containers-5b4e13f8-567a-4ade-b115-288f0e62531c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022252266s Jan 25 21:24:45.206: INFO: Pod "client-containers-5b4e13f8-567a-4ade-b115-288f0e62531c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042973852s Jan 25 21:24:47.217: INFO: Pod "client-containers-5b4e13f8-567a-4ade-b115-288f0e62531c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053663369s Jan 25 21:24:49.222: INFO: Pod "client-containers-5b4e13f8-567a-4ade-b115-288f0e62531c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059245152s STEP: Saw pod success Jan 25 21:24:49.223: INFO: Pod "client-containers-5b4e13f8-567a-4ade-b115-288f0e62531c" satisfied condition "success or failure" Jan 25 21:24:49.226: INFO: Trying to get logs from node jerma-node pod client-containers-5b4e13f8-567a-4ade-b115-288f0e62531c container test-container: STEP: delete the pod Jan 25 21:24:49.273: INFO: Waiting for pod client-containers-5b4e13f8-567a-4ade-b115-288f0e62531c to disappear Jan 25 21:24:49.301: INFO: Pod client-containers-5b4e13f8-567a-4ade-b115-288f0e62531c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:24:49.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5338" for this suite. 
• [SLOW TEST:8.709 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":414,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:24:49.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Jan 25 21:24:49.589: INFO: namespace kubectl-6910 Jan 25 21:24:49.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6910' Jan 25 21:24:50.066: INFO: stderr: "" Jan 25 21:24:50.066: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jan 25 21:24:51.078: INFO: Selector matched 1 pods for map[app:agnhost] Jan 25 21:24:51.078: INFO: Found 0 / 1 Jan 25 21:24:52.074: INFO: Selector matched 1 pods for map[app:agnhost] Jan 25 21:24:52.074: INFO: Found 0 / 1 Jan 25 21:24:53.078: INFO: Selector matched 1 pods for map[app:agnhost] Jan 25 21:24:53.079: INFO: Found 0 / 1 Jan 25 21:24:54.076: INFO: Selector matched 1 pods for map[app:agnhost] Jan 25 21:24:54.076: INFO: Found 0 / 1 Jan 25 21:24:55.106: INFO: Selector matched 1 pods for map[app:agnhost] Jan 25 21:24:55.107: INFO: Found 0 / 1 Jan 25 21:24:56.080: INFO: Selector matched 1 pods for map[app:agnhost] Jan 25 21:24:56.080: INFO: Found 0 / 1 Jan 25 21:24:57.073: INFO: Selector matched 1 pods for map[app:agnhost] Jan 25 21:24:57.073: INFO: Found 0 / 1 Jan 25 21:24:58.077: INFO: Selector matched 1 pods for map[app:agnhost] Jan 25 21:24:58.078: INFO: Found 1 / 1 Jan 25 21:24:58.078: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 25 21:24:58.083: INFO: Selector matched 1 pods for map[app:agnhost] Jan 25 21:24:58.083: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jan 25 21:24:58.083: INFO: wait on agnhost-master startup in kubectl-6910 Jan 25 21:24:58.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-cpx72 agnhost-master --namespace=kubectl-6910' Jan 25 21:24:58.320: INFO: stderr: "" Jan 25 21:24:58.321: INFO: stdout: "Paused\n" STEP: exposing RC Jan 25 21:24:58.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6910' Jan 25 21:24:58.645: INFO: stderr: "" Jan 25 21:24:58.645: INFO: stdout: "service/rm2 exposed\n" Jan 25 21:24:58.696: INFO: Service rm2 in namespace kubectl-6910 found. STEP: exposing service Jan 25 21:25:00.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6910' Jan 25 21:25:00.893: INFO: stderr: "" Jan 25 21:25:00.894: INFO: stdout: "service/rm3 exposed\n" Jan 25 21:25:00.900: INFO: Service rm3 in namespace kubectl-6910 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:25:02.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6910" for this suite. • [SLOW TEST:13.414 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":34,"skipped":419,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:25:02.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 25 21:25:03.004: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76ddf9fb-1cc3-4639-b9b5-f618f05687ff" in namespace "downward-api-6384" to be "success or failure" Jan 25 21:25:03.012: INFO: Pod "downwardapi-volume-76ddf9fb-1cc3-4639-b9b5-f618f05687ff": Phase="Pending", Reason="", readiness=false. Elapsed: 7.854726ms Jan 25 21:25:05.055: INFO: Pod "downwardapi-volume-76ddf9fb-1cc3-4639-b9b5-f618f05687ff": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.050640359s Jan 25 21:25:07.073: INFO: Pod "downwardapi-volume-76ddf9fb-1cc3-4639-b9b5-f618f05687ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068109422s Jan 25 21:25:09.078: INFO: Pod "downwardapi-volume-76ddf9fb-1cc3-4639-b9b5-f618f05687ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073382495s Jan 25 21:25:11.088: INFO: Pod "downwardapi-volume-76ddf9fb-1cc3-4639-b9b5-f618f05687ff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083491632s Jan 25 21:25:13.094: INFO: Pod "downwardapi-volume-76ddf9fb-1cc3-4639-b9b5-f618f05687ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.089117987s STEP: Saw pod success Jan 25 21:25:13.094: INFO: Pod "downwardapi-volume-76ddf9fb-1cc3-4639-b9b5-f618f05687ff" satisfied condition "success or failure" Jan 25 21:25:13.096: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-76ddf9fb-1cc3-4639-b9b5-f618f05687ff container client-container: STEP: delete the pod Jan 25 21:25:13.228: INFO: Waiting for pod downwardapi-volume-76ddf9fb-1cc3-4639-b9b5-f618f05687ff to disappear Jan 25 21:25:13.236: INFO: Pod downwardapi-volume-76ddf9fb-1cc3-4639-b9b5-f618f05687ff no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:25:13.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6384" for this suite. • [SLOW TEST:10.329 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":424,"failed":0} SSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:25:13.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 25 21:25:13.315: INFO: Creating deployment "webserver-deployment" Jan 25 21:25:13.353: INFO: Waiting for observed generation 1 Jan 25 21:25:15.811: INFO: Waiting for all required pods to come up Jan 25 21:25:16.060: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Jan 25 21:25:42.484: INFO: Waiting for deployment "webserver-deployment" to complete Jan 25 21:25:42.494: INFO: Updating deployment "webserver-deployment" with a non-existent image Jan 25 21:25:42.505: INFO: Updating deployment webserver-deployment Jan 25 
21:25:42.505: INFO: Waiting for observed generation 2 Jan 25 21:25:44.895: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jan 25 21:25:44.935: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jan 25 21:25:46.254: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jan 25 21:25:46.601: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jan 25 21:25:46.601: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jan 25 21:25:46.606: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jan 25 21:25:46.920: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Jan 25 21:25:46.920: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Jan 25 21:25:46.931: INFO: Updating deployment webserver-deployment Jan 25 21:25:46.931: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Jan 25 21:25:47.002: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jan 25 21:25:51.565: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jan 25 21:25:53.828: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-5265 /apis/apps/v1/namespaces/deployment-5265/deployments/webserver-deployment 32c804cb-4882-448b-8d2a-9134c3a83562 4324918 3 2020-01-25 21:25:13 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000ca8008 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-25 21:25:47 +0000 UTC,LastTransitionTime:2020-01-25 21:25:47 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-01-25 21:25:52 +0000 UTC,LastTransitionTime:2020-01-25 21:25:13 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jan 25 21:25:54.324: INFO: New 
ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-5265 /apis/apps/v1/namespaces/deployment-5265/replicasets/webserver-deployment-c7997dcc8 f85a2b5e-6bae-4d5b-9c1e-2c967f2917dd 4324915 3 2020-01-25 21:25:42 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 32c804cb-4882-448b-8d2a-9134c3a83562 0xc000ca8527 0xc000ca8528}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000ca8598 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 25 21:25:54.324: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jan 25 21:25:54.324: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-5265 /apis/apps/v1/namespaces/deployment-5265/replicasets/webserver-deployment-595b5b9587 4041c014-0ca6-4eae-8909-63d28979dfe2 4324903 3 2020-01-25 21:25:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 32c804cb-4882-448b-8d2a-9134c3a83562 0xc000ca8427 0xc000ca8428}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000ca84c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jan 25 21:25:54.780: INFO: Pod "webserver-deployment-595b5b9587-4lqf7" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4lqf7 
webserver-deployment-595b5b9587- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-595b5b9587-4lqf7 c459fd1e-6898-4c6c-8389-2636c09964cb 4324740 0 2020-01-25 21:25:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4041c014-0ca6-4eae-8909-63d28979dfe2 0xc000eb7607 0xc000eb7608}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 
21:25:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.5,StartTime:2020-01-25 21:25:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 21:25:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://b5d642e48281179b36ece0211a94c1b17277ab220854021ba21e3c377ec81d4f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.780: INFO: Pod "webserver-deployment-595b5b9587-4vvfk" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4vvfk webserver-deployment-595b5b9587- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-595b5b9587-4vvfk de9f0fb7-ecd0-4afd-b571-0eb5fa690b45 4324911 0 2020-01-25 21:25:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4041c014-0ca6-4eae-8909-63d28979dfe2 0xc000eb7880 0xc000eb7881}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-s
cheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-25 21:25:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.780: INFO: Pod "webserver-deployment-595b5b9587-5cjfr" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5cjfr webserver-deployment-595b5b9587- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-595b5b9587-5cjfr e42dc874-501a-4bb2-8652-d06b79e5290a 4324743 0 2020-01-25 21:25:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4041c014-0ca6-4eae-8909-63d28979dfe2 0xc000eb7af7 0xc000eb7af8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.7,StartTime:2020-01-25 21:25:13 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 21:25:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://facb287647b709b9c18558190f14a3b03fa36e96efb57b5ede9f5918bfdf26f8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.781: INFO: Pod "webserver-deployment-595b5b9587-69kn5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-69kn5 webserver-deployment-595b5b9587- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-595b5b9587-69kn5 4495700d-1bc5-418a-9332-104410f7c2cf 4324923 0 2020-01-25 21:25:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4041c014-0ca6-4eae-8909-63d28979dfe2 0xc000eb7c60 0xc000eb7c61}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-25 21:25:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.781: INFO: Pod "webserver-deployment-595b5b9587-blxqq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-blxqq webserver-deployment-595b5b9587- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-595b5b9587-blxqq fdaaf759-b90f-4418-8b04-9a67615f47b6 4324887 0 2020-01-25 21:25:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4041c014-0ca6-4eae-8909-63d28979dfe2 0xc000eb7e47 0xc000eb7e48}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.781: INFO: Pod "webserver-deployment-595b5b9587-cbmmp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cbmmp webserver-deployment-595b5b9587- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-595b5b9587-cbmmp 1143d2f5-521f-4a6a-8241-b96804e393d9 4324886 0 2020-01-25 21:25:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 
ReplicaSet webserver-deployment-595b5b9587 4041c014-0ca6-4eae-8909-63d28979dfe2 0xc004ef2097 0xc004ef2098}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.781: INFO: Pod "webserver-deployment-595b5b9587-d27cc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d27cc webserver-deployment-595b5b9587- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-595b5b9587-d27cc 71eda43b-6a56-4fe9-876d-063f9e0358b9 4324764 0 
2020-01-25 21:25:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4041c014-0ca6-4eae-8909-63d28979dfe2 0xc004ef2307 0xc004ef2308}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:13 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-25 21:25:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 21:25:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://710d16faa95d49ae712dfd291402461b35b671104b94d9519e3b9251d1387d04,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.782: INFO: Pod "webserver-deployment-595b5b9587-d6vj6" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d6vj6 webserver-deployment-595b5b9587- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-595b5b9587-d6vj6 0abd85d0-6b5f-4c42-994a-4d390a619ca8 4324759 0 2020-01-25 21:25:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4041c014-0ca6-4eae-8909-63d28979dfe2 0xc004ef2520 0xc004ef2521}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,Toleratio
nSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.4,StartTime:2020-01-25 21:25:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 21:25:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://50df63e456cd2fb322b4e42dba594c9c73e004d3366080485a03bf060137d460,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.782: INFO: Pod "webserver-deployment-595b5b9587-ksbzf" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ksbzf webserver-deployment-595b5b9587- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-595b5b9587-ksbzf b0c30b58-1f45-4201-b796-c58d792df560 4324768 0 2020-01-25 21:25:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4041c014-0ca6-4eae-8909-63d28979dfe2 0xc004ef2690 0xc004ef2691}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-25 21:25:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 21:25:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://24f0336abb74dc28f010372c67d461d0c32b1ad1e3a493c0574844bf49822e42,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.783: INFO: Pod "webserver-deployment-595b5b9587-m4sp9" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-m4sp9 webserver-deployment-595b5b9587- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-595b5b9587-m4sp9 84314b52-131f-42c7-8c9e-1360e43afca5 4324736 0 2020-01-25 21:25:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4041c014-0ca6-4eae-8909-63d28979dfe2 0xc004ef2800 0xc004ef2801}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.8,StartTime:2020-01-25 21:25:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 21:25:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://749a76eade059d5cefab94260c87f97764dcfe01f2c4221028afe4373603be8e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.783: INFO: Pod "webserver-deployment-595b5b9587-mgcqd" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mgcqd webserver-deployment-595b5b9587- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-595b5b9587-mgcqd bfc923b4-e5c6-4393-a286-643741cc47ee 4324885 0 2020-01-25 21:25:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4041c014-0ca6-4eae-8909-63d28979dfe2 0xc004ef2960 0xc004ef2961}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.783: INFO: Pod "webserver-deployment-595b5b9587-r48bx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-r48bx webserver-deployment-595b5b9587- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-595b5b9587-r48bx 2e07e0ef-23bd-41fa-ac58-1562aa1ec289 4324916 0 2020-01-25 21:25:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 
ReplicaSet webserver-deployment-595b5b9587 4041c014-0ca6-4eae-8909-63d28979dfe2 0xc004ef2a67 0xc004ef2a68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:47 
+0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-25 21:25:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.784: INFO: Pod "webserver-deployment-595b5b9587-rks89" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rks89 webserver-deployment-595b5b9587- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-595b5b9587-rks89 ff626781-af06-43b2-995b-fa1ab5306ee9 4324725 0 2020-01-25 21:25:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4041c014-0ca6-4eae-8909-63d28979dfe2 0xc004ef2bc7 0xc004ef2bc8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.6,StartTime:2020-01-25 21:25:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 21:25:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://179b388246a97efb97140b1561e7b22e661d98f0a91a162056ce2601d9913b3b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.784: INFO: Pod "webserver-deployment-595b5b9587-s6dkl" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-s6dkl webserver-deployment-595b5b9587- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-595b5b9587-s6dkl 2f79baf2-e64d-4c22-90c0-2a74ce270c5e 4324910 0 2020-01-25 21:25:46 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4041c014-0ca6-4eae-8909-63d28979dfe2 0xc004ef2d40 0xc004ef2d41}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-25 21:25:48 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.784: INFO: Pod "webserver-deployment-595b5b9587-sftnx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-sftnx webserver-deployment-595b5b9587- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-595b5b9587-sftnx 66f142f4-a4ed-407d-ac4d-c2285d18a226 4324872 0 2020-01-25 21:25:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4041c014-0ca6-4eae-8909-63d28979dfe2 0xc004ef2e97 0xc004ef2e98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.784: INFO: Pod "webserver-deployment-595b5b9587-t4dxb" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-t4dxb webserver-deployment-595b5b9587- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-595b5b9587-t4dxb 8f43a3aa-9ea7-4899-b07b-453db3b1c5bb 4324733 0 2020-01-25 21:25:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4041c014-0ca6-4eae-8909-63d28979dfe2 0xc004ef2fb7 0xc004ef2fb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-01-25 21:25:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 21:25:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://105ba241a6bb5d170d79074ccf0d4ae052e3da51da6a46eaec6eff6ac846b85e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.785: INFO: Pod "webserver-deployment-595b5b9587-tdw5s" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tdw5s webserver-deployment-595b5b9587- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-595b5b9587-tdw5s 853dba64-8e28-403b-b00a-3d77ad61e1b5 4324889 0 2020-01-25 21:25:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4041c014-0ca6-4eae-8909-63d28979dfe2 0xc004ef3120 0xc004ef3121}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.785: INFO: Pod "webserver-deployment-595b5b9587-x2mz8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-x2mz8 webserver-deployment-595b5b9587- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-595b5b9587-x2mz8 e4d5d350-aeaa-43d4-b141-01999800b322 4324920 0 2020-01-25 21:25:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 
ReplicaSet webserver-deployment-595b5b9587 4041c014-0ca6-4eae-8909-63d28979dfe2 0xc004ef3227 0xc004ef3228}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-01-25 21:25:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-25 21:25:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.786: INFO: Pod "webserver-deployment-595b5b9587-xqkfr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xqkfr webserver-deployment-595b5b9587- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-595b5b9587-xqkfr c82436d7-a12c-4625-a34d-298cc6fd962e 4324871 0 2020-01-25 21:25:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4041c014-0ca6-4eae-8909-63d28979dfe2 0xc004ef3377 0xc004ef3378}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},Ho
stAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.786: INFO: Pod "webserver-deployment-595b5b9587-zxfqq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zxfqq webserver-deployment-595b5b9587- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-595b5b9587-zxfqq 5db96a8c-4e90-4015-8cba-dd1e742ceeb6 4324888 0 2020-01-25 21:25:47 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4041c014-0ca6-4eae-8909-63d28979dfe2 0xc004ef3497 0xc004ef3498}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kube
rnetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.786: INFO: Pod "webserver-deployment-c7997dcc8-2k7zs" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2k7zs webserver-deployment-c7997dcc8- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-c7997dcc8-2k7zs a01669ef-8382-4981-8579-a595f726f588 4324879 0 2020-01-25 21:25:47 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f85a2b5e-6bae-4d5b-9c1e-2c967f2917dd 0xc004ef35b7 0xc004ef35b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effec
t:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.787: INFO: Pod "webserver-deployment-c7997dcc8-8x6pw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8x6pw webserver-deployment-c7997dcc8- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-c7997dcc8-8x6pw c15ddbd2-087f-45d7-bc1c-5c7a1647429d 4324874 0 2020-01-25 21:25:47 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f85a2b5e-6bae-4d5b-9c1e-2c967f2917dd 0xc004ef36e7 0xc004ef36e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tolerat
ion{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.787: INFO: Pod "webserver-deployment-c7997dcc8-cjktl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cjktl webserver-deployment-c7997dcc8- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-c7997dcc8-cjktl e9d1f8fd-eaad-4d1e-990c-53b195ec031a 4324900 0 2020-01-25 21:25:47 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f85a2b5e-6bae-4d5b-9c1e-2c967f2917dd 0xc004ef3807 0xc004ef3808}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitCon
tainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-25 21:25:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.788: INFO: Pod "webserver-deployment-c7997dcc8-djrzv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-djrzv webserver-deployment-c7997dcc8- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-c7997dcc8-djrzv 852ec2c7-d527-4856-bac7-ce852d9714e6 4324829 0 2020-01-25 21:25:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f85a2b5e-6bae-4d5b-9c1e-2c967f2917dd 0xc004ef3977 0xc004ef3978}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-25 21:25:44 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.788: INFO: Pod "webserver-deployment-c7997dcc8-f796w" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-f796w webserver-deployment-c7997dcc8- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-c7997dcc8-f796w fe6b7aa5-eecc-4c6b-a84c-04ec71b8653c 4324876 0 2020-01-25 21:25:47 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f85a2b5e-6bae-4d5b-9c1e-2c967f2917dd 0xc004ef3ae7 0xc004ef3ae8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:
ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.788: INFO: Pod "webserver-deployment-c7997dcc8-k9sm2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k9sm2 webserver-deployment-c7997dcc8- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-c7997dcc8-k9sm2 82358bba-410e-4710-a829-e1aa7844e795 4324878 0 2020-01-25 21:25:47 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f85a2b5e-6bae-4d5b-9c1e-2c967f2917dd 0xc004ef3c17 0xc004ef3c18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassNam
e:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.788: INFO: Pod "webserver-deployment-c7997dcc8-kjv4p" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kjv4p webserver-deployment-c7997dcc8- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-c7997dcc8-kjv4p 8fb3e8ef-9677-4199-917a-83e5e0d9d0a3 4324898 0 2020-01-25 21:25:47 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f85a2b5e-6bae-4d5b-9c1e-2c967f2917dd 0xc004ef3d47 0xc004ef3d48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,S
hareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.789: INFO: Pod "webserver-deployment-c7997dcc8-kng8x" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kng8x webserver-deployment-c7997dcc8- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-c7997dcc8-kng8x db098922-db59-4648-a7ec-2a54106dfc9c 4324877 0 2020-01-25 21:25:47 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f85a2b5e-6bae-4d5b-9c1e-2c967f2917dd 0xc004ef3e67 0xc004ef3e68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases
:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.789: INFO: Pod "webserver-deployment-c7997dcc8-lsc4l" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lsc4l webserver-deployment-c7997dcc8- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-c7997dcc8-lsc4l 5dd1b980-7a3d-4e9a-b255-f540f980c62d 4324798 0 2020-01-25 21:25:42 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f85a2b5e-6bae-4d5b-9c1e-2c967f2917dd 0xc004ef3f97 0xc004ef3f98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,V
alue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-25 21:25:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.789: INFO: Pod "webserver-deployment-c7997dcc8-ltv9r" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ltv9r webserver-deployment-c7997dcc8- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-c7997dcc8-ltv9r 7c71d532-25e1-4824-8413-e45c6a7875e7 4324881 0 2020-01-25 21:25:47 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f85a2b5e-6bae-4d5b-9c1e-2c967f2917dd 0xc004e32117 0xc004e32118}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.789: INFO: Pod "webserver-deployment-c7997dcc8-qglcv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qglcv webserver-deployment-c7997dcc8- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-c7997dcc8-qglcv c58a50df-fdc9-4c7f-ac76-a11027cb9eaa 4324806 0 2020-01-25 21:25:42 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
f85a2b5e-6bae-4d5b-9c1e-2c967f2917dd 0xc004e32267 0xc004e32268}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:42 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-25 21:25:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.790: INFO: Pod "webserver-deployment-c7997dcc8-rgj5m" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rgj5m webserver-deployment-c7997dcc8- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-c7997dcc8-rgj5m b57f9502-d8d9-45c9-bbc4-b85ecc6021d1 4324800 0 2020-01-25 21:25:42 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f85a2b5e-6bae-4d5b-9c1e-2c967f2917dd 0xc004e323e7 0xc004e323e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespa
ce:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-25 21:25:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 25 21:25:54.790: INFO: Pod "webserver-deployment-c7997dcc8-vpgpx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vpgpx webserver-deployment-c7997dcc8- deployment-5265 /api/v1/namespaces/deployment-5265/pods/webserver-deployment-c7997dcc8-vpgpx 660a15af-d8df-4a9e-81a9-e710046905e0 4324833 0 2020-01-25 21:25:44 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f85a2b5e-6bae-4d5b-9c1e-2c967f2917dd 0xc004e32557 0xc004e32558}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2vr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2vr6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:25:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-25 21:25:45 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:25:54.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5265" for this suite. • [SLOW TEST:43.924 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":36,"skipped":428,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:25:57.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:26:21.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7837" for this suite. • [SLOW TEST:24.427 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":37,"skipped":433,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:26:21.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 25 21:26:23.102: INFO: Waiting up to 5m0s for pod "pod-03a2ff0f-0ed8-4703-8601-734c54018152" in namespace "emptydir-9633" to be "success or failure" Jan 25 21:26:23.143: INFO: Pod "pod-03a2ff0f-0ed8-4703-8601-734c54018152": Phase="Pending", Reason="", readiness=false. Elapsed: 40.787952ms Jan 25 21:26:25.622: INFO: Pod "pod-03a2ff0f-0ed8-4703-8601-734c54018152": Phase="Pending", Reason="", readiness=false. Elapsed: 2.519851673s Jan 25 21:26:28.154: INFO: Pod "pod-03a2ff0f-0ed8-4703-8601-734c54018152": Phase="Pending", Reason="", readiness=false. Elapsed: 5.051392203s Jan 25 21:26:30.364: INFO: Pod "pod-03a2ff0f-0ed8-4703-8601-734c54018152": Phase="Pending", Reason="", readiness=false. Elapsed: 7.261150832s Jan 25 21:26:32.640: INFO: Pod "pod-03a2ff0f-0ed8-4703-8601-734c54018152": Phase="Pending", Reason="", readiness=false. Elapsed: 9.537663976s Jan 25 21:26:34.656: INFO: Pod "pod-03a2ff0f-0ed8-4703-8601-734c54018152": Phase="Pending", Reason="", readiness=false. Elapsed: 11.553917378s Jan 25 21:26:36.676: INFO: Pod "pod-03a2ff0f-0ed8-4703-8601-734c54018152": Phase="Pending", Reason="", readiness=false. Elapsed: 13.573929549s Jan 25 21:26:38.686: INFO: Pod "pod-03a2ff0f-0ed8-4703-8601-734c54018152": Phase="Pending", Reason="", readiness=false. Elapsed: 15.583860822s Jan 25 21:26:40.692: INFO: Pod "pod-03a2ff0f-0ed8-4703-8601-734c54018152": Phase="Pending", Reason="", readiness=false. Elapsed: 17.58932841s Jan 25 21:26:42.704: INFO: Pod "pod-03a2ff0f-0ed8-4703-8601-734c54018152": Phase="Pending", Reason="", readiness=false. Elapsed: 19.60125406s Jan 25 21:26:44.710: INFO: Pod "pod-03a2ff0f-0ed8-4703-8601-734c54018152": Phase="Pending", Reason="", readiness=false. Elapsed: 21.607454235s Jan 25 21:26:46.716: INFO: Pod "pod-03a2ff0f-0ed8-4703-8601-734c54018152": Phase="Pending", Reason="", readiness=false. Elapsed: 23.613942036s Jan 25 21:26:48.736: INFO: Pod "pod-03a2ff0f-0ed8-4703-8601-734c54018152": Phase="Pending", Reason="", readiness=false. Elapsed: 25.633829807s Jan 25 21:26:50.747: INFO: Pod "pod-03a2ff0f-0ed8-4703-8601-734c54018152": Phase="Pending", Reason="", readiness=false. Elapsed: 27.644915089s Jan 25 21:26:52.757: INFO: Pod "pod-03a2ff0f-0ed8-4703-8601-734c54018152": Phase="Succeeded", Reason="", readiness=false. 
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 21:26:53.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 21:26:53.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5197" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":39,"skipped":459,"failed":0}
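For context on the QOS Class verification above: a hedged sketch (the pod name is illustrative, not from this run) of a pod whose requests equal its limits for both cpu and memory, which the API server records as Guaranteed in status.qosClass:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: qos-demo                # illustrative name, not from this run
  spec:
    containers:
    - name: app
      image: k8s.gcr.io/pause:3.1 # an image already present on the test nodes in this log
      resources:
        requests:
          cpu: 100m
          memory: 100Mi
        limits:                   # limits == requests for every resource => Guaranteed
          cpu: 100m
          memory: 100Mi
  EOF
  kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # expected output: Guaranteed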
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":39,"skipped":459,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:26:53.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jan 25 21:27:06.217: INFO: Successfully updated pod "labelsupdateebdec909-77da-443b-8cee-c69f415ffee3" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:27:08.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4266" for this suite. • [SLOW TEST:14.965 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":497,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:27:08.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Jan 25 21:27:08.423: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Jan 25 21:27:08.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8804' Jan 25 21:27:08.999: INFO: stderr: "" Jan 25 21:27:09.000: INFO: 
stdout: "service/agnhost-slave created\n" Jan 25 21:27:09.000: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Jan 25 21:27:09.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8804' Jan 25 21:27:09.483: INFO: stderr: "" Jan 25 21:27:09.484: INFO: stdout: "service/agnhost-master created\n" Jan 25 21:27:09.485: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jan 25 21:27:09.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8804' Jan 25 21:27:09.948: INFO: stderr: "" Jan 25 21:27:09.949: INFO: stdout: "service/frontend created\n" Jan 25 21:27:09.950: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Jan 25 21:27:09.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8804' Jan 25 21:27:10.406: INFO: stderr: "" Jan 25 21:27:10.407: INFO: stdout: "deployment.apps/frontend created\n" Jan 25 21:27:10.408: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 25 21:27:10.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8804' Jan 25 21:27:10.989: INFO: stderr: "" Jan 25 21:27:10.989: INFO: stdout: "deployment.apps/agnhost-master created\n" Jan 25 21:27:10.989: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 25 21:27:10.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8804' Jan 25 21:27:12.249: INFO: stderr: "" Jan 25 21:27:12.249: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Jan 25 21:27:12.250: INFO: Waiting for all frontend pods to be Running. Jan 25 21:27:37.302: INFO: Waiting for frontend to serve content. Jan 25 21:27:37.358: INFO: Trying to add a new entry to the guestbook. Jan 25 21:27:37.375: INFO: Failed to get response from guestbook. 
Jan 25 21:27:37.375: INFO: Failed to get response from guestbook.
err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
[... the identical "Failed to get response from guestbook" / connection-refused entry repeats roughly every 5s from 21:27:42 through 21:30:33 ...]
Jan 25 21:30:38.173: FAIL: Cannot add new entry in 180 seconds.
Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.validateGuestbookApp(0x5424e60, 0xc000362840, 0xc002fb9e30, 0xc)
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2315 +0x551
k8s.io/kubernetes/test/e2e/kubectl.glob..func2.7.2()
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:417 +0x165
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001ccce00)
    _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a
k8s.io/kubernetes/test/e2e.TestE2E(0xc001ccce00)
    _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:112 +0x2b
testing.tRunner(0xc001ccce00, 0x4c30de8)
    /usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:960 +0x350
STEP: using delete to clean up resources
Jan 25 21:30:38.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8804'
Jan 25 21:30:41.047: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 21:30:41.047: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 25 21:30:41.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8804'
Jan 25 21:30:41.259: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 21:30:41.259: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 25 21:30:41.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8804'
Jan 25 21:30:41.440: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 21:30:41.440: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 25 21:30:41.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8804'
Jan 25 21:30:41.601: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 21:30:41.601: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 25 21:30:41.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8804'
Jan 25 21:30:41.754: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 21:30:41.754: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 25 21:30:41.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8804'
Jan 25 21:30:41.961: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 25 21:30:41.961: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "kubectl-8804". STEP: Found 37 events. Jan 25 21:30:42.027: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-master-74c46fb7d4-xvz6j: {default-scheduler } Scheduled: Successfully assigned kubectl-8804/agnhost-master-74c46fb7d4-xvz6j to jerma-node Jan 25 21:30:42.027: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-2nkm7: {default-scheduler } Scheduled: Successfully assigned kubectl-8804/agnhost-slave-774cfc759f-2nkm7 to jerma-server-mvvl6gufaqub Jan 25 21:30:42.027: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-mgvnt: {default-scheduler } Scheduled: Successfully assigned kubectl-8804/agnhost-slave-774cfc759f-mgvnt to jerma-node Jan 25 21:30:42.027: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-nxtsf: {default-scheduler } Scheduled: Successfully assigned kubectl-8804/frontend-6c5f89d5d4-nxtsf to jerma-node Jan 25 21:30:42.027: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-q26nb: {default-scheduler } Scheduled: Successfully assigned kubectl-8804/frontend-6c5f89d5d4-q26nb to jerma-node Jan 25 21:30:42.027: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-rkgl4: {default-scheduler } Scheduled: Successfully assigned kubectl-8804/frontend-6c5f89d5d4-rkgl4 to jerma-server-mvvl6gufaqub Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:10 +0000 UTC - event for frontend: {deployment-controller } ScalingReplicaSet: Scaled up replica set frontend-6c5f89d5d4 to 3 Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:10 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-nxtsf Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:10 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-rkgl4 Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:10 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-q26nb Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:12 +0000 UTC - event for agnhost-master: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-master-74c46fb7d4 to 1 Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:12 +0000 UTC - event for agnhost-master-74c46fb7d4: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-master-74c46fb7d4-xvz6j Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:12 +0000 UTC - event for agnhost-slave: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-slave-774cfc759f to 2 Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:12 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-2nkm7 Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:12 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-mgvnt Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:18 +0000 UTC - event for frontend-6c5f89d5d4-rkgl4: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Jan 25 21:30:42.027: INFO: At 
2020-01-25 21:27:20 +0000 UTC - event for agnhost-slave-774cfc759f-2nkm7: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:22 +0000 UTC - event for frontend-6c5f89d5d4-rkgl4: {kubelet jerma-server-mvvl6gufaqub} Created: Created container guestbook-frontend Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:23 +0000 UTC - event for agnhost-slave-774cfc759f-2nkm7: {kubelet jerma-server-mvvl6gufaqub} Created: Created container slave Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:23 +0000 UTC - event for frontend-6c5f89d5d4-nxtsf: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:24 +0000 UTC - event for agnhost-slave-774cfc759f-2nkm7: {kubelet jerma-server-mvvl6gufaqub} Started: Started container slave Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:24 +0000 UTC - event for frontend-6c5f89d5d4-q26nb: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:24 +0000 UTC - event for frontend-6c5f89d5d4-rkgl4: {kubelet jerma-server-mvvl6gufaqub} Started: Started container guestbook-frontend Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:26 +0000 UTC - event for agnhost-slave-774cfc759f-mgvnt: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:28 +0000 UTC - event for agnhost-master-74c46fb7d4-xvz6j: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:30 +0000 UTC - event for agnhost-master-74c46fb7d4-xvz6j: {kubelet jerma-node} Created: Created container master Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:30 +0000 UTC - event for agnhost-slave-774cfc759f-mgvnt: {kubelet jerma-node} Created: Created container slave Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:30 +0000 UTC - event for frontend-6c5f89d5d4-nxtsf: {kubelet jerma-node} Created: Created container guestbook-frontend Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:30 +0000 UTC - event for frontend-6c5f89d5d4-q26nb: {kubelet jerma-node} Created: Created container guestbook-frontend Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:31 +0000 UTC - event for agnhost-master-74c46fb7d4-xvz6j: {kubelet jerma-node} Started: Started container master Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:31 +0000 UTC - event for agnhost-slave-774cfc759f-mgvnt: {kubelet jerma-node} Started: Started container slave Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:31 +0000 UTC - event for frontend-6c5f89d5d4-nxtsf: {kubelet jerma-node} Started: Started container guestbook-frontend Jan 25 21:30:42.027: INFO: At 2020-01-25 21:27:31 +0000 UTC - event for frontend-6c5f89d5d4-q26nb: {kubelet jerma-node} Started: Started container guestbook-frontend Jan 25 21:30:42.027: INFO: At 2020-01-25 21:30:41 +0000 UTC - event for agnhost-master-74c46fb7d4-xvz6j: {kubelet jerma-node} Killing: Stopping container master Jan 25 21:30:42.027: INFO: At 2020-01-25 21:30:41 +0000 UTC - event for frontend-6c5f89d5d4-nxtsf: {kubelet jerma-node} Killing: Stopping container guestbook-frontend Jan 25 21:30:42.027: INFO: At 2020-01-25 21:30:41 +0000 UTC - event for frontend-6c5f89d5d4-q26nb: {kubelet jerma-node} 
Killing: Stopping container guestbook-frontend Jan 25 21:30:42.027: INFO: At 2020-01-25 21:30:41 +0000 UTC - event for frontend-6c5f89d5d4-rkgl4: {kubelet jerma-server-mvvl6gufaqub} Killing: Stopping container guestbook-frontend Jan 25 21:30:42.047: INFO: POD NODE PHASE GRACE CONDITIONS Jan 25 21:30:42.048: INFO: agnhost-master-74c46fb7d4-xvz6j jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 21:27:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 21:27:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 21:27:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 21:27:12 +0000 UTC }] Jan 25 21:30:42.048: INFO: agnhost-slave-774cfc759f-2nkm7 jerma-server-mvvl6gufaqub Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 21:27:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 21:27:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 21:27:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 21:27:12 +0000 UTC }] Jan 25 21:30:42.048: INFO: agnhost-slave-774cfc759f-mgvnt jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 21:27:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 21:27:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 21:27:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 21:27:12 +0000 UTC }] Jan 25 21:30:42.048: INFO: frontend-6c5f89d5d4-nxtsf jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 21:27:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 21:27:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 21:27:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 21:27:10 +0000 UTC }] Jan 25 21:30:42.048: INFO: frontend-6c5f89d5d4-q26nb jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 21:27:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 21:27:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 21:27:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 21:27:10 +0000 UTC }] Jan 25 21:30:42.048: INFO: frontend-6c5f89d5d4-rkgl4 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 21:27:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 21:27:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 21:27:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 21:27:10 +0000 UTC }] Jan 25 21:30:42.048: INFO: Jan 25 21:30:42.100: INFO: Logging node info for node jerma-node Jan 25 21:30:42.117: INFO: Node Info: &Node{ObjectMeta:{jerma-node /api/v1/nodes/jerma-node 6236bfb4-6b64-4c0a-82c6-f768ceeab07c 4325734 0 2020-01-04 11:59:52 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-node kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 
0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 12:00:49 +0000 UTC,LastTransitionTime:2020-01-04 12:00:49 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-25 21:29:18 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-25 21:29:18 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-25 21:29:18 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-25 21:29:18 +0000 UTC,LastTransitionTime:2020-01-04 12:00:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.2.250,},NodeAddress{Type:Hostname,Address:jerma-node,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bdc16344252549dd902c3a5d68b22f41,SystemUUID:BDC16344-2525-49DD-902C-3A5D68B22F41,BootID:eec61fc4-8bf6-487f-8f93-ea9731fe757a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest],SizeBytes:5496756,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 25 21:30:42.120: INFO: Logging kubelet events for node jerma-node Jan 25 21:30:42.127: INFO: Logging pods the kubelet thinks is on node jerma-node Jan 25 21:30:42.239: INFO: kube-proxy-dsf66 started at 2020-01-04 11:59:52 +0000 UTC (0+1 container statuses recorded) Jan 25 21:30:42.239: INFO: Container kube-proxy ready: true, restart count 0 Jan 25 21:30:42.240: INFO: agnhost-master-74c46fb7d4-xvz6j started at 2020-01-25 21:27:13 +0000 UTC (0+1 container statuses recorded) Jan 25 21:30:42.240: INFO: 
Container master ready: true, restart count 0 Jan 25 21:30:42.240: INFO: weave-net-kz8lv started at 2020-01-04 11:59:52 +0000 UTC (0+2 container statuses recorded) Jan 25 21:30:42.240: INFO: Container weave ready: true, restart count 1 Jan 25 21:30:42.240: INFO: Container weave-npc ready: true, restart count 0 Jan 25 21:30:42.240: INFO: frontend-6c5f89d5d4-nxtsf started at 2020-01-25 21:27:10 +0000 UTC (0+1 container statuses recorded) Jan 25 21:30:42.240: INFO: Container guestbook-frontend ready: true, restart count 0 Jan 25 21:30:42.240: INFO: frontend-6c5f89d5d4-q26nb started at 2020-01-25 21:27:10 +0000 UTC (0+1 container statuses recorded) Jan 25 21:30:42.240: INFO: Container guestbook-frontend ready: true, restart count 0 Jan 25 21:30:42.240: INFO: agnhost-slave-774cfc759f-mgvnt started at 2020-01-25 21:27:13 +0000 UTC (0+1 container statuses recorded) Jan 25 21:30:42.240: INFO: Container slave ready: true, restart count 0 W0125 21:30:42.253277 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 25 21:30:42.340: INFO: Latency metrics for node jerma-node Jan 25 21:30:42.340: INFO: Logging node info for node jerma-server-mvvl6gufaqub Jan 25 21:30:42.354: INFO: Node Info: &Node{ObjectMeta:{jerma-server-mvvl6gufaqub /api/v1/nodes/jerma-server-mvvl6gufaqub a2a7fe9b-7d59-43f1-bbe3-2a69f99cabd2 4325853 0 2020-01-04 11:47:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-server-mvvl6gufaqub kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 11:48:36 +0000 UTC,LastTransitionTime:2020-01-04 11:48:36 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-25 21:30:10 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-25 21:30:10 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-25 21:30:10 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-25 21:30:10 +0000 UTC,LastTransitionTime:2020-01-04 11:48:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.1.234,},NodeAddress{Type:Hostname,Address:jerma-server-mvvl6gufaqub,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3f0346566ad342efb0c9f55677d0a8ea,SystemUUID:3F034656-6AD3-42EF-B0C9-F55677D0A8EA,BootID:87dae5d0-e99d-4d31-a4e7-fbd07d84e951,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e3ec33d533257902ad9ebe3d399c17710e62009201a7202aec941e351545d662 k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:170957331,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:0438efb5098a2ca634ea8c6b0d804742b733d0d13fd53cf62c73e32c659a3c39 k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:160877075,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:5215c4216a65f7e76c1895ba951a12dc1c947904a91810fc66a544ff1d7e87db k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:94431763,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 25 21:30:42.356: INFO: Logging kubelet events for node jerma-server-mvvl6gufaqub Jan 25 21:30:42.369: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub Jan 25 21:30:42.410: INFO: coredns-6955765f44-bhnn4 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded) Jan 25 21:30:42.410: INFO: Container coredns ready: true, restart count 0 Jan 25 21:30:42.410: INFO: coredns-6955765f44-bwd85 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded) Jan 25 21:30:42.410: INFO: Container coredns ready: true, restart count 0 Jan 25 21:30:42.410: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded) Jan 25 21:30:42.410: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 25 21:30:42.410: INFO: kube-proxy-chkps started at 2020-01-04 11:48:11 +0000 UTC (0+1 container statuses recorded) Jan 25 21:30:42.410: INFO: Container kube-proxy ready: true, restart count 0 Jan 25 21:30:42.410: INFO: weave-net-z6tjf started at 2020-01-04 11:48:11 +0000 UTC (0+2 container statuses recorded) Jan 25 21:30:42.411: INFO: Container weave ready: true, restart count 0 Jan 25 21:30:42.411: INFO: Container weave-npc ready: true, restart count 0 Jan 25 21:30:42.411: INFO: frontend-6c5f89d5d4-rkgl4 started at 2020-01-25 21:27:10 +0000 UTC (0+1 container statuses recorded) Jan 25 21:30:42.411: INFO: Container guestbook-frontend ready: true, restart count 0 Jan 25 21:30:42.411: INFO: kube-scheduler-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded) Jan 25 21:30:42.411: INFO: Container kube-scheduler ready: true, restart count 3 Jan 25 21:30:42.411: INFO: agnhost-slave-774cfc759f-2nkm7 started at 2020-01-25 21:27:13 +0000 UTC (0+1 container statuses recorded) Jan 25 21:30:42.411: INFO: Container slave ready: true, restart count 0 Jan 25 21:30:42.411: INFO: kube-apiserver-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded) Jan 25 21:30:42.411: INFO: Container kube-apiserver ready: true, restart count 1 Jan 25 21:30:42.411: INFO: etcd-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded) Jan 25 21:30:42.411: INFO: Container etcd ready: true, restart count 1 W0125 21:30:42.418208 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 25 21:30:42.469: INFO: Latency metrics for node jerma-server-mvvl6gufaqub Jan 25 21:30:42.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8804" for this suite. 
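------------------------------
The node and pod dumps above are the e2e framework's automatic failure diagnostics for the Guestbook test summarized next. A rough manual equivalent, assuming kubectl access to the same cluster and the node names shown in the log, is:

# Node conditions, capacity and cached images (mirrors the "Node Info" dump)
kubectl describe node jerma-node

# Pods the kubelet is running on that node (mirrors the kubelet pod listing)
kubectl get pods --all-namespaces --field-selector spec.nodeName=jerma-node

# Recent events involving the node, often the quickest lead when a test times out
kubectl get events --all-namespaces --field-selector involvedObject.name=jerma-node
------------------------------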
• Failure [214.168 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:385 should create and stop a working application [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 25 21:30:38.174: Cannot add new entry in 180 seconds. /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2315 ------------------------------ {"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":40,"skipped":552,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:30:42.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jan 25 21:30:43.866: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:31:12.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4499" for this suite. 
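------------------------------
The Pods test above submits a pod, verifies the creation was observed on a watch, deletes it with a grace period, and verifies the kubelet saw the termination notice. A minimal command-line sketch of the same lifecycle, assuming a recent kubectl (where `kubectl run` creates a bare pod) and a throwaway namespace:

kubectl create namespace pods-demo                        # hypothetical namespace
kubectl run submitted-pod --image=k8s.gcr.io/pause:3.1 -n pods-demo
kubectl get pods -n pods-demo --watch &                   # observe the pod's transitions
kubectl delete pod submitted-pod -n pods-demo --grace-period=30
# The watch shows the pod enter Terminating and then disappear once the
# kubelet confirms its containers have stopped.
kubectl delete namespace pods-demo
------------------------------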
• [SLOW TEST:29.933 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":578,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:31:12.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0125 21:31:23.042638 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 25 21:31:23.042: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:31:23.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8887" for this suite. 
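------------------------------
The garbage-collector test above gives half of one ReplicationController's pods a second owner, then deletes only the first owner; the pods must survive because a valid owner remains. The mechanism is metadata.ownerReferences. A sketch of the metadata such a doubly-owned pod carries (RC names are taken from the log, the pod name and UIDs are placeholders):

metadata:
  name: simpletest-pod                 # illustrative pod name
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: 11111111-1111-1111-1111-111111111111   # placeholder UID
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: 22222222-2222-2222-2222-222222222222   # placeholder UID

# Deleting only the first owner leaves the pod alive, since
# simpletest-rc-to-stay still references it:
#   kubectl delete rc simpletest-rc-to-be-deleted
------------------------------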
• [SLOW TEST:10.642 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":42,"skipped":597,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:31:23.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 25 21:31:28.783: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 25 21:31:31.177: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:31:33.268: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:31:35.185: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:31:37.193: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:31:39.228: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:31:41.185: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, 
loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:31:43.183: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:31:45.186: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584688, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 25 21:31:48.312: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:32:00.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4219" for this suite. STEP: Destroying namespace "webhook-4219-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:37.786 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":43,"skipped":626,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:32:00.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jan 25 21:32:12.997: INFO: &Pod{ObjectMeta:{send-events-f15cbdbf-7619-4694-8f3a-a73c4e34d5cf events-51 /api/v1/namespaces/events-51/pods/send-events-f15cbdbf-7619-4694-8f3a-a73c4e34d5cf 6fdb3f1c-201e-431d-9687-29e0ed24f945 4326441 0 2020-01-25 21:32:00 +0000 UTC map[name:foo time:917553898] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72k8x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72k8x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72k8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:32:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:32:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:32:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:32:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-25 21:32:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 21:32:10 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://0dba7c5213a17cd7e208a5a3c8938060f837c423685da63474cb1517ba6b3802,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Jan 25 21:32:15.005: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jan 25 21:32:17.012: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:32:17.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-51" for this suite. • [SLOW TEST:16.199 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":44,"skipped":670,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:32:17.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 25 21:32:17.335: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 25 21:32:20.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7328 create -f -' Jan 25 21:32:23.132: INFO: stderr: "" Jan 25 21:32:23.133: INFO: stdout: "e2e-test-crd-publish-openapi-8682-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 25 21:32:23.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7328 delete e2e-test-crd-publish-openapi-8682-crds test-cr' Jan 25 21:32:23.354: INFO: stderr: "" Jan 25 21:32:23.355: INFO: stdout: "e2e-test-crd-publish-openapi-8682-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jan 25 21:32:23.355: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7328 apply -f -' Jan 25 21:32:23.751: INFO: stderr: "" Jan 25 21:32:23.751: INFO: stdout: "e2e-test-crd-publish-openapi-8682-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 25 21:32:23.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7328 delete e2e-test-crd-publish-openapi-8682-crds test-cr' Jan 25 21:32:23.934: INFO: stderr: "" Jan 25 21:32:23.934: INFO: stdout: "e2e-test-crd-publish-openapi-8682-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jan 25 21:32:23.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8682-crds' Jan 25 21:32:24.387: INFO: stderr: "" Jan 25 21:32:24.387: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8682-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:32:27.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7328" for this suite. • [SLOW TEST:10.165 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":45,"skipped":679,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:32:27.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-1b328608-aee4-4955-b24c-c54b3224a5d2 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-1b328608-aee4-4955-b24c-c54b3224a5d2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:33:40.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7664" for this suite. 
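------------------------------
In the CRD test above, client-side validation accepts arbitrary properties and `kubectl explain` prints an empty description because the CRD publishes no structural schema for its fields. In apiextensions.k8s.io/v1 that is expressed with x-kubernetes-preserve-unknown-fields at the schema root; a minimal sketch with a placeholder group and kind:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com            # placeholder: <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Widget
    plural: widgets
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # accept any properties
------------------------------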
• [SLOW TEST:73.456 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":700,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:33:40.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Jan 25 21:33:40.839: INFO: Waiting up to 5m0s for pod "var-expansion-cd4553c7-55af-4fca-9f43-de70d50d9bf1" in namespace "var-expansion-6913" to be "success or failure" Jan 25 21:33:40.849: INFO: Pod "var-expansion-cd4553c7-55af-4fca-9f43-de70d50d9bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.61083ms Jan 25 21:33:42.866: INFO: Pod "var-expansion-cd4553c7-55af-4fca-9f43-de70d50d9bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027299062s Jan 25 21:33:44.877: INFO: Pod "var-expansion-cd4553c7-55af-4fca-9f43-de70d50d9bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038192386s Jan 25 21:33:46.885: INFO: Pod "var-expansion-cd4553c7-55af-4fca-9f43-de70d50d9bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046350919s Jan 25 21:33:48.894: INFO: Pod "var-expansion-cd4553c7-55af-4fca-9f43-de70d50d9bf1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055121438s STEP: Saw pod success Jan 25 21:33:48.894: INFO: Pod "var-expansion-cd4553c7-55af-4fca-9f43-de70d50d9bf1" satisfied condition "success or failure" Jan 25 21:33:48.902: INFO: Trying to get logs from node jerma-node pod var-expansion-cd4553c7-55af-4fca-9f43-de70d50d9bf1 container dapi-container: STEP: delete the pod Jan 25 21:33:48.949: INFO: Waiting for pod var-expansion-cd4553c7-55af-4fca-9f43-de70d50d9bf1 to disappear Jan 25 21:33:48.988: INFO: Pod var-expansion-cd4553c7-55af-4fca-9f43-de70d50d9bf1 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:33:48.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6913" for this suite. 
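------------------------------
The Variable Expansion test above verifies that $(VAR) references in a container's args are substituted from the container's environment before the command runs (the expansion is done by Kubernetes, not by a shell). A minimal pod sketch with placeholder names and values:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo             # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["/bin/sh", "-c"]
    args: ["echo test value: $(TEST_VAR)"]   # $(TEST_VAR) expanded by Kubernetes
    env:
    - name: TEST_VAR
      value: "test-value"

# kubectl logs var-expansion-demo should print: test value: test-value
------------------------------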
• [SLOW TEST:8.339 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":709,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:33:49.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 25 21:33:49.118: INFO: Waiting up to 5m0s for pod "downwardapi-volume-94f84e3d-0730-4751-8f35-e80a1c21bb42" in namespace "downward-api-6861" to be "success or failure" Jan 25 21:33:49.156: INFO: Pod "downwardapi-volume-94f84e3d-0730-4751-8f35-e80a1c21bb42": Phase="Pending", Reason="", readiness=false. Elapsed: 37.217889ms Jan 25 21:33:51.165: INFO: Pod "downwardapi-volume-94f84e3d-0730-4751-8f35-e80a1c21bb42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046538031s Jan 25 21:33:53.174: INFO: Pod "downwardapi-volume-94f84e3d-0730-4751-8f35-e80a1c21bb42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055452621s Jan 25 21:33:55.244: INFO: Pod "downwardapi-volume-94f84e3d-0730-4751-8f35-e80a1c21bb42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125495216s Jan 25 21:33:57.250: INFO: Pod "downwardapi-volume-94f84e3d-0730-4751-8f35-e80a1c21bb42": Phase="Pending", Reason="", readiness=false. Elapsed: 8.131408141s Jan 25 21:33:59.255: INFO: Pod "downwardapi-volume-94f84e3d-0730-4751-8f35-e80a1c21bb42": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.136876911s STEP: Saw pod success Jan 25 21:33:59.256: INFO: Pod "downwardapi-volume-94f84e3d-0730-4751-8f35-e80a1c21bb42" satisfied condition "success or failure" Jan 25 21:33:59.261: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-94f84e3d-0730-4751-8f35-e80a1c21bb42 container client-container: STEP: delete the pod Jan 25 21:33:59.318: INFO: Waiting for pod downwardapi-volume-94f84e3d-0730-4751-8f35-e80a1c21bb42 to disappear Jan 25 21:33:59.325: INFO: Pod downwardapi-volume-94f84e3d-0730-4751-8f35-e80a1c21bb42 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:33:59.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6861" for this suite. • [SLOW TEST:10.323 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":723,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} S ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:33:59.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-7647, will wait for the garbage collector to delete the pods Jan 25 21:34:11.528: INFO: Deleting Job.batch foo took: 24.115307ms Jan 25 21:34:11.829: INFO: Terminating Job.batch foo pods took: 300.884662ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:34:47.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7647" for this suite. 
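------------------------------
In the Job test above, deleting the Job returns quickly and its pods are then removed asynchronously by the garbage collector, which is why the test waits separately for the pods to terminate. A kubectl sketch, assuming a job named foo:

kubectl delete job foo
# Pods owned by the Job are cleaned up in the background by the garbage
# collector. To block until all dependents are gone, request foreground
# cascading deletion instead (string-valued --cascade needs kubectl >= 1.20):
kubectl delete job foo --cascade=foreground
------------------------------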
• [SLOW TEST:47.817 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":49,"skipped":724,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:34:47.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jan 25 21:34:47.343: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1062 /api/v1/namespaces/watch-1062/configmaps/e2e-watch-test-label-changed 8e1356a3-b351-4358-a7f3-a6eba1ccc8de 4326952 0 2020-01-25 21:34:47 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 25 21:34:47.344: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1062 /api/v1/namespaces/watch-1062/configmaps/e2e-watch-test-label-changed 8e1356a3-b351-4358-a7f3-a6eba1ccc8de 4326953 0 2020-01-25 21:34:47 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 25 21:34:47.344: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1062 /api/v1/namespaces/watch-1062/configmaps/e2e-watch-test-label-changed 8e1356a3-b351-4358-a7f3-a6eba1ccc8de 4326954 0 2020-01-25 21:34:47 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jan 25 21:34:57.502: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1062 /api/v1/namespaces/watch-1062/configmaps/e2e-watch-test-label-changed 8e1356a3-b351-4358-a7f3-a6eba1ccc8de 4326994 0 2020-01-25 21:34:47 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} Jan 25 21:34:57.502: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1062 /api/v1/namespaces/watch-1062/configmaps/e2e-watch-test-label-changed 8e1356a3-b351-4358-a7f3-a6eba1ccc8de 4326996 0 2020-01-25 21:34:47 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jan 25 21:34:57.502: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1062 /api/v1/namespaces/watch-1062/configmaps/e2e-watch-test-label-changed 8e1356a3-b351-4358-a7f3-a6eba1ccc8de 4326997 0 2020-01-25 21:34:47 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:34:57.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1062" for this suite. • [SLOW TEST:10.396 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":50,"skipped":743,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:34:57.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Jan 25 21:34:57.705: INFO: >>> kubeConfig: /root/.kube/config Jan 25 21:35:01.238: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:35:13.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2918" for this suite. 
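------------------------------
The test above registers two CRDs that share a group and version but define different kinds, and checks that both kinds are published in the cluster's OpenAPI document. A sketch of such a pair, with placeholder names:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Foo
    plural: foos
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
---
# A second CRD identical except for the names: bars.example.com with
# kind: Bar, plural: bars, in the same group/version.

# Both kinds should then appear under definitions in the aggregated schema
# (CRD definitions are keyed by the reversed group, e.g. com.example.v1.Foo):
#   kubectl get --raw /openapi/v2
------------------------------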
• [SLOW TEST:15.934 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":51,"skipped":745,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:35:13.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jan 25 21:35:13.595: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
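------------------------------
"Registering the sample API server" above means creating an APIService object that tells the kube-aggregator to proxy one group/version to an in-cluster service backed by the sample-apiserver deployment whose rollout is polled below. A sketch of such a registration, with a placeholder group, service name, and CA bundle:

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com    # must be <version>.<group>
spec:
  group: wardle.example.com            # placeholder group
  version: v1alpha1
  groupPriorityMinimum: 1000
  versionPriority: 15
  service:
    name: sample-api                   # placeholder service name
    namespace: aggregator-4691
    port: 443
  caBundle: <base64-encoded CA>        # elided placeholder

# Once the backing deployment is Available, requests to
# /apis/wardle.example.com/v1alpha1 are proxied to that service.
------------------------------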
Jan 25 21:35:14.094: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jan 25 21:35:16.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584914, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584914, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584914, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584914, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:35:18.214: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584914, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584914, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584914, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584914, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:35:20.214: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584914, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584914, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584914, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584914, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:35:22.214: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584914, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584914, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584914, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584914, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:35:25.142: INFO: Waited 922.779697ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:35:25.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4691" for this suite. • [SLOW TEST:12.201 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":52,"skipped":784,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:35:25.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jan 25 21:35:26.896: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jan 25 21:35:28.915: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584926, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584926, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584927, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584926, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:35:30.921: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584926, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584926, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584927, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584926, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:35:32.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584926, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584926, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584927, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584926, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:35:34.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584926, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584926, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584927, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584926, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 25 21:35:37.957: INFO: Waiting for amount of 
service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 25 21:35:37.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:35:39.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-5869" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:13.989 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":53,"skipped":801,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:35:39.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jan 25 21:35:39.751: INFO: PodSpec: initContainers in spec.initContainers Jan 25 21:36:46.444: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-d239b5ae-b946-4007-96c0-a4dd42aee5ca", GenerateName:"", Namespace:"init-container-4593", SelfLink:"/api/v1/namespaces/init-container-4593/pods/pod-init-d239b5ae-b946-4007-96c0-a4dd42aee5ca", UID:"a43b154c-a796-4384-9078-2851f08225ef", ResourceVersion:"4327449", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715584939, loc:(*time.Location)(0x7d100a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"751858967"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-dmw8n", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0064fc6c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dmw8n", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dmw8n", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dmw8n", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0047ff818), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc004171aa0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0047ff900)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0047ff920)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0047ff928), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0047ff92c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584940, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584940, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584940, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715584939, 
loc:(*time.Location)(0x7d100a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.1", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.1"}}, StartTime:(*v1.Time)(0xc00274c900), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00061d650)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00061d6c0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://5ce3b48ff01bfeea01dc955267087b448f33edccbe0ce5e73e74f8e9026d3db7", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00274c960), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00274c920), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0047ff9ff)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:36:46.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4593" for this suite. 
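The pod dump above is worth decoding: init1 runs /bin/false and terminates non-zero on every attempt (RestartCount:3 by this point), init2 sits in Waiting because init containers run strictly in order, and the app container run1 never starts, which is exactly what the test asserts. With RestartPolicy "Always" the kubelet keeps retrying init1 with backoff instead of failing the pod outright. A minimal sketch of the spec being exercised (reconstructed from the dump; not the test's literal source):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingInitPod mirrors the dumped spec: init1 always exits non-zero, so
// init2 never runs and the app container run1 never starts.
func failingInitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "pod-init-",
			Labels:       map[string]string{"name": "foo"},
		},
		Spec: corev1.PodSpec{
			// "Always" makes the kubelet retry the failing init container
			// with exponential backoff rather than marking the pod Failed.
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
}

func main() {
	b, _ := json.MarshalIndent(failingInitPod(), "", "  ")
	fmt.Println(string(b))
}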
• [SLOW TEST:66.795 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":54,"skipped":805,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:36:46.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 25 21:36:46.605: INFO: Waiting up to 5m0s for pod "pod-df6b1607-f278-4492-86b6-4291d9cbc54b" in namespace "emptydir-0" to be "success or failure" Jan 25 21:36:46.620: INFO: Pod "pod-df6b1607-f278-4492-86b6-4291d9cbc54b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.969411ms Jan 25 21:36:48.626: INFO: Pod "pod-df6b1607-f278-4492-86b6-4291d9cbc54b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020729809s Jan 25 21:36:50.634: INFO: Pod "pod-df6b1607-f278-4492-86b6-4291d9cbc54b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028973393s Jan 25 21:36:52.645: INFO: Pod "pod-df6b1607-f278-4492-86b6-4291d9cbc54b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039978221s Jan 25 21:36:54.652: INFO: Pod "pod-df6b1607-f278-4492-86b6-4291d9cbc54b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046667942s Jan 25 21:36:56.663: INFO: Pod "pod-df6b1607-f278-4492-86b6-4291d9cbc54b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057767985s STEP: Saw pod success Jan 25 21:36:56.663: INFO: Pod "pod-df6b1607-f278-4492-86b6-4291d9cbc54b" satisfied condition "success or failure" Jan 25 21:36:56.668: INFO: Trying to get logs from node jerma-node pod pod-df6b1607-f278-4492-86b6-4291d9cbc54b container test-container: STEP: delete the pod Jan 25 21:36:56.769: INFO: Waiting for pod pod-df6b1607-f278-4492-86b6-4291d9cbc54b to disappear Jan 25 21:36:56.775: INFO: Pod pod-df6b1607-f278-4492-86b6-4291d9cbc54b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:36:56.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-0" for this suite. 
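Each clause of the emptydir test name (non-root, 0644, tmpfs) maps to one field of the pod it creates: a RunAsUser security context, the file mode the container writes, and an emptyDir with medium "Memory". A sketch of that shape (image, command, and mount path are assumptions, not the test's literal values):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tmpfsEmptyDirPod sketches a pod with a memory-backed emptyDir mounted at
// /mnt/volume, written by a non-root user with 0644 permissions.
func tmpfsEmptyDirPod() *corev1.Pod {
	nonRoot := int64(1000) // any non-zero UID satisfies "non-root"
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29",
				// umask 022 yields 0644 files; stat prints the mode so the
				// result can be checked from the pod logs, in the same
				// "success or failure" style the framework uses above.
				Command: []string{"/bin/sh", "-c",
					"umask 022 && echo hi > /mnt/volume/f && stat -c %a /mnt/volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/volume"}},
			}},
		},
	}
}

func main() {
	fmt.Println(tmpfsEmptyDirPod().Spec.Containers[0].Command)
}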
• [SLOW TEST:10.384 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":854,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:36:56.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:37:06.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-829" for this suite. 
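Adoption in the ReplicationController test hinges on ownership, not creation order alone: the bare pod carries the label name=pod-adoption, and when an RC with a matching selector appears, the controller manager sets itself as the pod's controller ownerReference instead of spawning a replacement. A sketch of the two actors (the pause image is an assumption):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// orphanAndController returns the test's two objects: a bare pod, and a
// ReplicationController whose selector matches the pod's label, so the
// controller adopts the existing pod rather than creating a new one.
func orphanAndController() (*corev1.Pod, *corev1.ReplicationController) {
	labels := map[string]string{"name": "pod-adoption"}
	one := int32(1)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:  "pod-adoption",
			Image: "k8s.gcr.io/pause:3.1", // hypothetical; any long-running image works
		}}},
	}
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &one,
			Selector: labels, // the matching selector that triggers adoption
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       pod.Spec,
			},
		},
	}
	return pod, rc
}

func main() {
	pod, rc := orphanAndController()
	fmt.Println(pod.Name, rc.Name)
}

Inspecting the pod afterwards with kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].name}' should print the controller's name, which is the "orphan pod is adopted" assertion.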
• [SLOW TEST:9.263 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":56,"skipped":862,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:37:06.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Jan 25 21:37:06.293: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix778457656/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:37:06.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2913" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":57,"skipped":884,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:37:06.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
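A DaemonSet has no replica count; the controller creates one pod per eligible node, so on this two-node cluster the polling below converges when both nodes report an available pod. A minimal sketch of the "daemon-set" object being created (label key and image are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// simpleDaemonSet sketches a DaemonSet named "daemon-set": no replica count,
// one pod per node, pods matched by the daemonset-name label.
func simpleDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"} // assumed label key
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/pause:3.1", // hypothetical image
					}},
				},
			},
		},
	}
}

func main() {
	b, _ := json.MarshalIndent(simpleDaemonSet(), "", "  ")
	fmt.Println(string(b))
}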
Jan 25 21:37:06.596: INFO: Number of nodes with available pods: 0
Jan 25 21:37:06.596: INFO: Node jerma-node is running more than one daemon pod
Jan 25 21:37:07.613: INFO: Number of nodes with available pods: 0
Jan 25 21:37:07.613: INFO: Node jerma-node is running more than one daemon pod
Jan 25 21:37:08.915: INFO: Number of nodes with available pods: 0
Jan 25 21:37:08.915: INFO: Node jerma-node is running more than one daemon pod
Jan 25 21:37:09.608: INFO: Number of nodes with available pods: 0
Jan 25 21:37:09.608: INFO: Node jerma-node is running more than one daemon pod
Jan 25 21:37:10.620: INFO: Number of nodes with available pods: 0
Jan 25 21:37:10.620: INFO: Node jerma-node is running more than one daemon pod
Jan 25 21:37:13.101: INFO: Number of nodes with available pods: 0
Jan 25 21:37:13.101: INFO: Node jerma-node is running more than one daemon pod
Jan 25 21:37:13.723: INFO: Number of nodes with available pods: 0
Jan 25 21:37:13.724: INFO: Node jerma-node is running more than one daemon pod
Jan 25 21:37:15.312: INFO: Number of nodes with available pods: 0
Jan 25 21:37:15.313: INFO: Node jerma-node is running more than one daemon pod
Jan 25 21:37:15.635: INFO: Number of nodes with available pods: 0
Jan 25 21:37:15.635: INFO: Node jerma-node is running more than one daemon pod
Jan 25 21:37:16.638: INFO: Number of nodes with available pods: 2
Jan 25 21:37:16.638: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 25 21:37:16.714: INFO: Number of nodes with available pods: 1
Jan 25 21:37:16.715: INFO: Node jerma-node is running more than one daemon pod
Jan 25 21:37:17.729: INFO: Number of nodes with available pods: 1
Jan 25 21:37:17.729: INFO: Node jerma-node is running more than one daemon pod
Jan 25 21:37:18.728: INFO: Number of nodes with available pods: 1
Jan 25 21:37:18.728: INFO: Node jerma-node is running more than one daemon pod
Jan 25 21:37:19.726: INFO: Number of nodes with available pods: 1
Jan 25 21:37:19.726: INFO: Node jerma-node is running more than one daemon pod
Jan 25 21:37:20.724: INFO: Number of nodes with available pods: 1
Jan 25 21:37:20.724: INFO: Node jerma-node is running more than one daemon pod
Jan 25 21:37:21.728: INFO: Number of nodes with available pods: 1
Jan 25 21:37:21.728: INFO: Node jerma-node is running more than one daemon pod
Jan 25 21:37:22.723: INFO: Number of nodes with available pods: 1
Jan 25 21:37:22.723: INFO: Node jerma-node is running more than one daemon pod
Jan 25 21:37:23.744: INFO: Number of nodes with available pods: 1
Jan 25 21:37:23.745: INFO: Node jerma-node is running more than one daemon pod
Jan 25 21:37:24.725: INFO: Number of nodes with available pods: 1
Jan 25 21:37:24.725: INFO: Node jerma-node is running more than one daemon pod
Jan 25 21:37:25.726: INFO: Number of nodes with available pods: 1
Jan 25 21:37:25.727: INFO: Node jerma-node is running more than one daemon pod
Jan 25 21:37:26.729: INFO: Number of nodes with available pods: 1
Jan 25 21:37:26.729: INFO: Node jerma-node is running more than one daemon pod
Jan 25 21:37:27.729: INFO: Number of nodes with available pods: 2
Jan 25 21:37:27.729: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
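The 'Failed' step above does not kill a container; the test rewrites one daemon pod's status subresource, and the DaemonSet controller reacts by deleting that pod and creating a fresh one, which explains the dip from 2 available pods to 1 and the climb back to 2. A sketch of the status flip, assuming a client-go vintage matching this run (pre-1.18 method signatures without a context argument); the label selector reuses the assumed key from the DaemonSet sketch above:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// failOneDaemonPod marks one daemon pod Failed via the status subresource,
// which is what pushes the DaemonSet controller down its retry path.
func failOneDaemonPod(cs kubernetes.Interface, namespace string) error {
	pods, err := cs.CoreV1().Pods(namespace).List(metav1.ListOptions{
		LabelSelector: "daemonset-name=daemon-set", // assumed label
	})
	if err != nil {
		return err
	}
	if len(pods.Items) == 0 {
		return fmt.Errorf("no daemon pods in %s", namespace)
	}
	pod := pods.Items[0]
	pod.Status.Phase = corev1.PodFailed
	_, err = cs.CoreV1().Pods(namespace).UpdateStatus(&pod)
	return err
}

func main() {
	// Kubeconfig path and namespace are taken from this run's log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := failOneDaemonPod(cs, "daemonsets-5958"); err != nil {
		panic(err)
	}
}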
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5958, will wait for the garbage collector to delete the pods Jan 25 21:37:27.807: INFO: Deleting DaemonSet.extensions daemon-set took: 13.845362ms Jan 25 21:37:28.208: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.997759ms Jan 25 21:37:34.812: INFO: Number of nodes with available pods: 0 Jan 25 21:37:34.813: INFO: Number of running nodes: 0, number of available pods: 0 Jan 25 21:37:34.820: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5958/daemonsets","resourceVersion":"4327690"},"items":null} Jan 25 21:37:34.826: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5958/pods","resourceVersion":"4327690"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:37:34.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5958" for this suite. • [SLOW TEST:28.445 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":58,"skipped":896,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:37:34.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 25 21:37:34.978: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-5d8d6688-137c-4f22-bc58-bfd854d95eee" in namespace "security-context-test-45" to be "success or failure" Jan 25 21:37:34.999: INFO: Pod "busybox-readonly-false-5d8d6688-137c-4f22-bc58-bfd854d95eee": Phase="Pending", Reason="", readiness=false. Elapsed: 20.407579ms Jan 25 21:37:37.007: INFO: Pod "busybox-readonly-false-5d8d6688-137c-4f22-bc58-bfd854d95eee": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.029188459s Jan 25 21:37:39.012: INFO: Pod "busybox-readonly-false-5d8d6688-137c-4f22-bc58-bfd854d95eee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034166757s Jan 25 21:37:41.024: INFO: Pod "busybox-readonly-false-5d8d6688-137c-4f22-bc58-bfd854d95eee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045703163s Jan 25 21:37:43.031: INFO: Pod "busybox-readonly-false-5d8d6688-137c-4f22-bc58-bfd854d95eee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052573902s Jan 25 21:37:43.031: INFO: Pod "busybox-readonly-false-5d8d6688-137c-4f22-bc58-bfd854d95eee" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:37:43.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-45" for this suite. • [SLOW TEST:8.211 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":905,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:37:43.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5313.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5313.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5313.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5313.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5313.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5313.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5313.svc.cluster.local 
SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5313.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5313.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5313.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5313.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 114.252.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.252.114_udp@PTR;check="$$(dig +tcp +noall +answer +search 114.252.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.252.114_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5313.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5313.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5313.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5313.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5313.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5313.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5313.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5313.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5313.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5313.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5313.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 114.252.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.252.114_udp@PTR;check="$$(dig +tcp +noall +answer +search 114.252.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.252.114_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 25 21:37:55.816: INFO: Unable to read wheezy_udp@dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:37:55.832: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:37:55.839: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:37:55.844: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:37:55.889: INFO: Unable to read jessie_udp@dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:37:55.894: INFO: Unable to read jessie_tcp@dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:37:55.900: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:37:55.907: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:37:55.955: INFO: Lookups using dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45 failed for: [wheezy_udp@dns-test-service.dns-5313.svc.cluster.local wheezy_tcp@dns-test-service.dns-5313.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local jessie_udp@dns-test-service.dns-5313.svc.cluster.local jessie_tcp@dns-test-service.dns-5313.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local] Jan 25 21:38:00.965: INFO: Unable to read wheezy_udp@dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:00.971: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods 
dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:00.980: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:00.985: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:01.015: INFO: Unable to read jessie_udp@dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:01.019: INFO: Unable to read jessie_tcp@dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:01.023: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:01.027: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:01.088: INFO: Lookups using dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45 failed for: [wheezy_udp@dns-test-service.dns-5313.svc.cluster.local wheezy_tcp@dns-test-service.dns-5313.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local jessie_udp@dns-test-service.dns-5313.svc.cluster.local jessie_tcp@dns-test-service.dns-5313.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local] Jan 25 21:38:05.967: INFO: Unable to read wheezy_udp@dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:05.974: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:05.982: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:05.991: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:06.052: INFO: Unable to read jessie_udp@dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the 
server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:06.056: INFO: Unable to read jessie_tcp@dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:06.061: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:06.064: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:06.089: INFO: Lookups using dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45 failed for: [wheezy_udp@dns-test-service.dns-5313.svc.cluster.local wheezy_tcp@dns-test-service.dns-5313.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local jessie_udp@dns-test-service.dns-5313.svc.cluster.local jessie_tcp@dns-test-service.dns-5313.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local] Jan 25 21:38:11.865: INFO: Unable to read wheezy_udp@dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:11.876: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:11.886: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:11.898: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:11.963: INFO: Unable to read jessie_udp@dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:11.967: INFO: Unable to read jessie_tcp@dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:11.972: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:11.977: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local from pod 
dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:11.999: INFO: Lookups using dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45 failed for: [wheezy_udp@dns-test-service.dns-5313.svc.cluster.local wheezy_tcp@dns-test-service.dns-5313.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local jessie_udp@dns-test-service.dns-5313.svc.cluster.local jessie_tcp@dns-test-service.dns-5313.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local] Jan 25 21:38:15.963: INFO: Unable to read wheezy_udp@dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:15.968: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:15.972: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:15.976: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:16.013: INFO: Unable to read jessie_udp@dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:16.017: INFO: Unable to read jessie_tcp@dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:16.020: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:16.023: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:16.043: INFO: Lookups using dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45 failed for: [wheezy_udp@dns-test-service.dns-5313.svc.cluster.local wheezy_tcp@dns-test-service.dns-5313.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local jessie_udp@dns-test-service.dns-5313.svc.cluster.local jessie_tcp@dns-test-service.dns-5313.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local] Jan 25 
21:38:20.963: INFO: Unable to read wheezy_udp@dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:20.968: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:20.972: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:20.994: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:21.023: INFO: Unable to read jessie_udp@dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:21.026: INFO: Unable to read jessie_tcp@dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:21.029: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:21.033: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local from pod dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45: the server could not find the requested resource (get pods dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45) Jan 25 21:38:21.072: INFO: Lookups using dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45 failed for: [wheezy_udp@dns-test-service.dns-5313.svc.cluster.local wheezy_tcp@dns-test-service.dns-5313.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local jessie_udp@dns-test-service.dns-5313.svc.cluster.local jessie_tcp@dns-test-service.dns-5313.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc.cluster.local] Jan 25 21:38:26.036: INFO: DNS probes using dns-5313/dns-test-68b63c68-80ff-4a9e-b90d-a0811fd79f45 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:38:26.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5313" for this suite. 
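The dig loops above poll a set of A, SRV, and PTR names once per second and write an OK file for each success; the repeated "Unable to read ..." rounds only mean the result files were not there yet while the records propagated, and by 21:38:26 every lookup has succeeded. The records themselves are the standard Kubernetes service DNS contract, which Go's resolver can express directly (the names and ClusterIP are taken from the commands above; this only resolves from inside the cluster):

package main

import (
	"fmt"
	"net"
)

// probeServiceDNS repeats the test's checks with the stdlib resolver:
// an A record for the service, the SRV record for its named http port,
// and the PTR record for its ClusterIP.
func probeServiceDNS() {
	const svc = "dns-test-service.dns-5313.svc.cluster.local"

	if addrs, err := net.LookupHost(svc); err == nil {
		fmt.Println("A:", addrs)
	}

	// SRV records live at _<port>._<proto>.<service>.<ns>.svc.<zone>.
	if _, srvs, err := net.LookupSRV("http", "tcp", svc); err == nil {
		for _, s := range srvs {
			fmt.Printf("SRV: %s:%d\n", s.Target, s.Port)
		}
	}

	// Reverse lookup of the ClusterIP seen in the log (10.96.252.114).
	if names, err := net.LookupAddr("10.96.252.114"); err == nil {
		fmt.Println("PTR:", names)
	}
}

func main() { probeServiceDNS() }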
• [SLOW TEST:43.419 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":60,"skipped":910,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:38:26.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-4e8be298-a305-492c-beee-82745a18eff6 STEP: Creating a pod to test consume secrets Jan 25 21:38:26.637: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7456dc23-f90f-42db-955b-3ffc73001404" in namespace "projected-4788" to be "success or failure" Jan 25 21:38:26.704: INFO: Pod "pod-projected-secrets-7456dc23-f90f-42db-955b-3ffc73001404": Phase="Pending", Reason="", readiness=false. Elapsed: 66.004853ms Jan 25 21:38:28.713: INFO: Pod "pod-projected-secrets-7456dc23-f90f-42db-955b-3ffc73001404": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075437104s Jan 25 21:38:30.720: INFO: Pod "pod-projected-secrets-7456dc23-f90f-42db-955b-3ffc73001404": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082791699s Jan 25 21:38:32.728: INFO: Pod "pod-projected-secrets-7456dc23-f90f-42db-955b-3ffc73001404": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090473063s Jan 25 21:38:34.734: INFO: Pod "pod-projected-secrets-7456dc23-f90f-42db-955b-3ffc73001404": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096617056s Jan 25 21:38:36.743: INFO: Pod "pod-projected-secrets-7456dc23-f90f-42db-955b-3ffc73001404": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.105570323s STEP: Saw pod success Jan 25 21:38:36.744: INFO: Pod "pod-projected-secrets-7456dc23-f90f-42db-955b-3ffc73001404" satisfied condition "success or failure" Jan 25 21:38:36.749: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-7456dc23-f90f-42db-955b-3ffc73001404 container projected-secret-volume-test: STEP: delete the pod Jan 25 21:38:36.834: INFO: Waiting for pod pod-projected-secrets-7456dc23-f90f-42db-955b-3ffc73001404 to disappear Jan 25 21:38:36.878: INFO: Pod pod-projected-secrets-7456dc23-f90f-42db-955b-3ffc73001404 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:38:36.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4788" for this suite. • [SLOW TEST:10.406 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":976,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:38:36.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jan 25 21:38:37.030: INFO: Waiting up to 5m0s for pod "downward-api-e24a40b8-ddb1-41be-b4ee-3b05c6cd3489" in namespace "downward-api-7100" to be "success or failure" Jan 25 21:38:37.036: INFO: Pod "downward-api-e24a40b8-ddb1-41be-b4ee-3b05c6cd3489": Phase="Pending", Reason="", readiness=false. Elapsed: 5.631108ms Jan 25 21:38:39.058: INFO: Pod "downward-api-e24a40b8-ddb1-41be-b4ee-3b05c6cd3489": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026952124s Jan 25 21:38:41.064: INFO: Pod "downward-api-e24a40b8-ddb1-41be-b4ee-3b05c6cd3489": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033365409s Jan 25 21:38:43.072: INFO: Pod "downward-api-e24a40b8-ddb1-41be-b4ee-3b05c6cd3489": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041019822s Jan 25 21:38:45.078: INFO: Pod "downward-api-e24a40b8-ddb1-41be-b4ee-3b05c6cd3489": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.04732409s STEP: Saw pod success Jan 25 21:38:45.078: INFO: Pod "downward-api-e24a40b8-ddb1-41be-b4ee-3b05c6cd3489" satisfied condition "success or failure" Jan 25 21:38:45.081: INFO: Trying to get logs from node jerma-node pod downward-api-e24a40b8-ddb1-41be-b4ee-3b05c6cd3489 container dapi-container: STEP: delete the pod Jan 25 21:38:45.122: INFO: Waiting for pod downward-api-e24a40b8-ddb1-41be-b4ee-3b05c6cd3489 to disappear Jan 25 21:38:45.137: INFO: Pod downward-api-e24a40b8-ddb1-41be-b4ee-3b05c6cd3489 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:38:45.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7100" for this suite. • [SLOW TEST:8.314 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":983,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:38:45.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1841 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 25 21:38:45.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4455' Jan 25 21:38:45.564: INFO: stderr: "" Jan 25 21:38:45.565: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1846 Jan 25 21:38:45.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4455' Jan 25 21:38:49.608: INFO: stderr: "" Jan 25 21:38:49.608: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:38:49.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4455" for this suite. 
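[editor's note] The Kubectl run pod test above does not use client-go for the pod creation; as the log shows, it shells out to the kubectl binary. The sketch below reproduces that invocation with os/exec, with every flag copied from the logged command line. Note that --generator=run-pod/v1 is the v1.17-era flag and has since been removed from kubectl, so this is a sketch of what this suite ran, not current usage.

    // runpod.go - sketch of the kubectl invocation logged above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("/usr/local/bin/kubectl",
            "--kubeconfig=/root/.kube/config",
            "run", "e2e-test-httpd-pod",
            "--restart=Never",
            "--generator=run-pod/v1",
            "--image=docker.io/library/httpd:2.4.38-alpine",
            "--namespace=kubectl-4455",
        ).CombinedOutput()
        fmt.Printf("%s err=%v\n", out, err)
    }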
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":63,"skipped":983,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:38:49.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8674 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Jan 25 21:38:49.732: INFO: Found 0 stateful pods, waiting for 3 Jan 25 21:38:59.970: INFO: Found 2 stateful pods, waiting for 3 Jan 25 21:39:09.742: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 25 21:39:09.742: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 25 21:39:09.742: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 25 21:39:19.741: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 25 21:39:19.741: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 25 21:39:19.741: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 25 21:39:19.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8674 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 25 21:39:20.183: INFO: stderr: "I0125 21:39:19.974823 725 log.go:172] (0xc0008aae70) (0xc0005fff40) Create stream\nI0125 21:39:19.975072 725 log.go:172] (0xc0008aae70) (0xc0005fff40) Stream added, broadcasting: 1\nI0125 21:39:19.979923 725 log.go:172] (0xc0008aae70) Reply frame received for 1\nI0125 21:39:19.980026 725 log.go:172] (0xc0008aae70) (0xc0007cc000) Create stream\nI0125 21:39:19.980036 725 log.go:172] (0xc0008aae70) (0xc0007cc000) Stream added, broadcasting: 3\nI0125 21:39:19.981415 725 log.go:172] (0xc0008aae70) Reply frame received for 3\nI0125 21:39:19.981523 725 log.go:172] (0xc0008aae70) (0xc0007cc0a0) Create stream\nI0125 21:39:19.981541 725 log.go:172] (0xc0008aae70) (0xc0007cc0a0) Stream added, broadcasting: 5\nI0125 21:39:19.982597 725 log.go:172] (0xc0008aae70) Reply frame received for 5\nI0125 21:39:20.066182 725 log.go:172] (0xc0008aae70) Data frame received for 5\nI0125 21:39:20.066251 725 log.go:172] (0xc0007cc0a0) (5) Data frame handling\nI0125 
21:39:20.066273 725 log.go:172] (0xc0007cc0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 21:39:20.105866 725 log.go:172] (0xc0008aae70) Data frame received for 3\nI0125 21:39:20.105894 725 log.go:172] (0xc0007cc000) (3) Data frame handling\nI0125 21:39:20.105909 725 log.go:172] (0xc0007cc000) (3) Data frame sent\nI0125 21:39:20.175519 725 log.go:172] (0xc0008aae70) Data frame received for 1\nI0125 21:39:20.175626 725 log.go:172] (0xc0008aae70) (0xc0007cc0a0) Stream removed, broadcasting: 5\nI0125 21:39:20.175714 725 log.go:172] (0xc0005fff40) (1) Data frame handling\nI0125 21:39:20.175733 725 log.go:172] (0xc0005fff40) (1) Data frame sent\nI0125 21:39:20.175757 725 log.go:172] (0xc0008aae70) (0xc0007cc000) Stream removed, broadcasting: 3\nI0125 21:39:20.175785 725 log.go:172] (0xc0008aae70) (0xc0005fff40) Stream removed, broadcasting: 1\nI0125 21:39:20.175812 725 log.go:172] (0xc0008aae70) Go away received\nI0125 21:39:20.176831 725 log.go:172] (0xc0008aae70) (0xc0005fff40) Stream removed, broadcasting: 1\nI0125 21:39:20.176869 725 log.go:172] (0xc0008aae70) (0xc0007cc000) Stream removed, broadcasting: 3\nI0125 21:39:20.176878 725 log.go:172] (0xc0008aae70) (0xc0007cc0a0) Stream removed, broadcasting: 5\n" Jan 25 21:39:20.183: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 25 21:39:20.183: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jan 25 21:39:30.226: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 25 21:39:40.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8674 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 25 21:39:40.757: INFO: stderr: "I0125 21:39:40.521498 742 log.go:172] (0xc00066e9a0) (0xc000662000) Create stream\nI0125 21:39:40.522527 742 log.go:172] (0xc00066e9a0) (0xc000662000) Stream added, broadcasting: 1\nI0125 21:39:40.528378 742 log.go:172] (0xc00066e9a0) Reply frame received for 1\nI0125 21:39:40.528608 742 log.go:172] (0xc00066e9a0) (0xc000662140) Create stream\nI0125 21:39:40.528657 742 log.go:172] (0xc00066e9a0) (0xc000662140) Stream added, broadcasting: 3\nI0125 21:39:40.530464 742 log.go:172] (0xc00066e9a0) Reply frame received for 3\nI0125 21:39:40.530603 742 log.go:172] (0xc00066e9a0) (0xc00067dae0) Create stream\nI0125 21:39:40.530625 742 log.go:172] (0xc00066e9a0) (0xc00067dae0) Stream added, broadcasting: 5\nI0125 21:39:40.532829 742 log.go:172] (0xc00066e9a0) Reply frame received for 5\nI0125 21:39:40.641531 742 log.go:172] (0xc00066e9a0) Data frame received for 3\nI0125 21:39:40.641753 742 log.go:172] (0xc000662140) (3) Data frame handling\nI0125 21:39:40.641791 742 log.go:172] (0xc000662140) (3) Data frame sent\nI0125 21:39:40.641873 742 log.go:172] (0xc00066e9a0) Data frame received for 5\nI0125 21:39:40.641884 742 log.go:172] (0xc00067dae0) (5) Data frame handling\nI0125 21:39:40.641898 742 log.go:172] (0xc00067dae0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0125 21:39:40.743763 742 log.go:172] (0xc00066e9a0) (0xc00067dae0) Stream removed, broadcasting: 5\nI0125 21:39:40.744038 742 log.go:172] (0xc00066e9a0) Data frame received for 1\nI0125 21:39:40.744120 742 log.go:172] 
(0xc00066e9a0) (0xc000662140) Stream removed, broadcasting: 3\nI0125 21:39:40.744169 742 log.go:172] (0xc000662000) (1) Data frame handling\nI0125 21:39:40.744204 742 log.go:172] (0xc000662000) (1) Data frame sent\nI0125 21:39:40.744220 742 log.go:172] (0xc00066e9a0) (0xc000662000) Stream removed, broadcasting: 1\nI0125 21:39:40.744248 742 log.go:172] (0xc00066e9a0) Go away received\nI0125 21:39:40.746648 742 log.go:172] (0xc00066e9a0) (0xc000662000) Stream removed, broadcasting: 1\nI0125 21:39:40.746658 742 log.go:172] (0xc00066e9a0) (0xc000662140) Stream removed, broadcasting: 3\nI0125 21:39:40.746662 742 log.go:172] (0xc00066e9a0) (0xc00067dae0) Stream removed, broadcasting: 5\n" Jan 25 21:39:40.757: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 25 21:39:40.757: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 25 21:39:50.794: INFO: Waiting for StatefulSet statefulset-8674/ss2 to complete update Jan 25 21:39:50.794: INFO: Waiting for Pod statefulset-8674/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 25 21:39:50.794: INFO: Waiting for Pod statefulset-8674/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 25 21:39:50.794: INFO: Waiting for Pod statefulset-8674/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 25 21:40:01.237: INFO: Waiting for StatefulSet statefulset-8674/ss2 to complete update Jan 25 21:40:01.237: INFO: Waiting for Pod statefulset-8674/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 25 21:40:01.237: INFO: Waiting for Pod statefulset-8674/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 25 21:40:10.834: INFO: Waiting for StatefulSet statefulset-8674/ss2 to complete update Jan 25 21:40:10.834: INFO: Waiting for Pod statefulset-8674/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 25 21:40:20.832: INFO: Waiting for StatefulSet statefulset-8674/ss2 to complete update Jan 25 21:40:20.833: INFO: Waiting for Pod statefulset-8674/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Jan 25 21:40:30.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8674 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 25 21:40:31.399: INFO: stderr: "I0125 21:40:31.197550 763 log.go:172] (0xc000bdd3f0) (0xc000ce0280) Create stream\nI0125 21:40:31.198053 763 log.go:172] (0xc000bdd3f0) (0xc000ce0280) Stream added, broadcasting: 1\nI0125 21:40:31.203101 763 log.go:172] (0xc000bdd3f0) Reply frame received for 1\nI0125 21:40:31.203158 763 log.go:172] (0xc000bdd3f0) (0xc000a04320) Create stream\nI0125 21:40:31.203183 763 log.go:172] (0xc000bdd3f0) (0xc000a04320) Stream added, broadcasting: 3\nI0125 21:40:31.204993 763 log.go:172] (0xc000bdd3f0) Reply frame received for 3\nI0125 21:40:31.205019 763 log.go:172] (0xc000bdd3f0) (0xc000974140) Create stream\nI0125 21:40:31.205030 763 log.go:172] (0xc000bdd3f0) (0xc000974140) Stream added, broadcasting: 5\nI0125 21:40:31.206223 763 log.go:172] (0xc000bdd3f0) Reply frame received for 5\nI0125 21:40:31.284302 763 log.go:172] (0xc000bdd3f0) Data frame received for 5\nI0125 21:40:31.284372 763 log.go:172] (0xc000974140) (5) Data frame handling\nI0125 21:40:31.284397 763 log.go:172] (0xc000974140) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0125 21:40:31.315850 763 log.go:172] (0xc000bdd3f0) Data frame received for 3\nI0125 21:40:31.315887 763 log.go:172] (0xc000a04320) (3) Data frame handling\nI0125 21:40:31.315911 763 log.go:172] (0xc000a04320) (3) Data frame sent\nI0125 21:40:31.387223 763 log.go:172] (0xc000bdd3f0) Data frame received for 1\nI0125 21:40:31.387283 763 log.go:172] (0xc000ce0280) (1) Data frame handling\nI0125 21:40:31.387307 763 log.go:172] (0xc000ce0280) (1) Data frame sent\nI0125 21:40:31.391072 763 log.go:172] (0xc000bdd3f0) (0xc000a04320) Stream removed, broadcasting: 3\nI0125 21:40:31.391151 763 log.go:172] (0xc000bdd3f0) (0xc000ce0280) Stream removed, broadcasting: 1\nI0125 21:40:31.391259 763 log.go:172] (0xc000bdd3f0) (0xc000974140) Stream removed, broadcasting: 5\nI0125 21:40:31.391342 763 log.go:172] (0xc000bdd3f0) Go away received\nI0125 21:40:31.392201 763 log.go:172] (0xc000bdd3f0) (0xc000ce0280) Stream removed, broadcasting: 1\nI0125 21:40:31.392216 763 log.go:172] (0xc000bdd3f0) (0xc000a04320) Stream removed, broadcasting: 3\nI0125 21:40:31.392224 763 log.go:172] (0xc000bdd3f0) (0xc000974140) Stream removed, broadcasting: 5\n" Jan 25 21:40:31.400: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 25 21:40:31.400: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 25 21:40:41.462: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 25 21:40:51.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8674 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 25 21:40:51.902: INFO: stderr: "I0125 21:40:51.731478 783 log.go:172] (0xc000a31ad0) (0xc000a36780) Create stream\nI0125 21:40:51.731674 783 log.go:172] (0xc000a31ad0) (0xc000a36780) Stream added, broadcasting: 1\nI0125 21:40:51.736214 783 log.go:172] (0xc000a31ad0) Reply frame received for 1\nI0125 21:40:51.736242 783 log.go:172] (0xc000a31ad0) (0xc00067e640) Create stream\nI0125 21:40:51.736250 783 log.go:172] (0xc000a31ad0) (0xc00067e640) Stream added, broadcasting: 3\nI0125 21:40:51.737032 783 log.go:172] (0xc000a31ad0) Reply frame received for 3\nI0125 21:40:51.737054 783 log.go:172] (0xc000a31ad0) (0xc0004e1400) Create stream\nI0125 21:40:51.737060 783 log.go:172] (0xc000a31ad0) (0xc0004e1400) Stream added, broadcasting: 5\nI0125 21:40:51.737787 783 log.go:172] (0xc000a31ad0) Reply frame received for 5\nI0125 21:40:51.807372 783 log.go:172] (0xc000a31ad0) Data frame received for 3\nI0125 21:40:51.807641 783 log.go:172] (0xc00067e640) (3) Data frame handling\nI0125 21:40:51.807703 783 log.go:172] (0xc00067e640) (3) Data frame sent\nI0125 21:40:51.807973 783 log.go:172] (0xc000a31ad0) Data frame received for 5\nI0125 21:40:51.808001 783 log.go:172] (0xc0004e1400) (5) Data frame handling\nI0125 21:40:51.808043 783 log.go:172] (0xc0004e1400) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0125 21:40:51.892586 783 log.go:172] (0xc000a31ad0) (0xc00067e640) Stream removed, broadcasting: 3\nI0125 21:40:51.892980 783 log.go:172] (0xc000a31ad0) Data frame received for 1\nI0125 21:40:51.893032 783 log.go:172] (0xc000a36780) (1) Data frame handling\nI0125 21:40:51.893203 783 log.go:172] (0xc000a31ad0) (0xc0004e1400) Stream removed, broadcasting: 5\nI0125 21:40:51.893297 783 log.go:172] (0xc000a36780) (1) Data frame sent\nI0125 
21:40:51.893328 783 log.go:172] (0xc000a31ad0) (0xc000a36780) Stream removed, broadcasting: 1\nI0125 21:40:51.894196 783 log.go:172] (0xc000a31ad0) Go away received\nI0125 21:40:51.894704 783 log.go:172] (0xc000a31ad0) (0xc000a36780) Stream removed, broadcasting: 1\nI0125 21:40:51.894720 783 log.go:172] (0xc000a31ad0) (0xc00067e640) Stream removed, broadcasting: 3\nI0125 21:40:51.894727 783 log.go:172] (0xc000a31ad0) (0xc0004e1400) Stream removed, broadcasting: 5\n" Jan 25 21:40:51.902: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 25 21:40:51.902: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 25 21:41:02.063: INFO: Waiting for StatefulSet statefulset-8674/ss2 to complete update Jan 25 21:41:02.063: INFO: Waiting for Pod statefulset-8674/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 25 21:41:02.063: INFO: Waiting for Pod statefulset-8674/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 25 21:41:12.077: INFO: Waiting for StatefulSet statefulset-8674/ss2 to complete update Jan 25 21:41:12.077: INFO: Waiting for Pod statefulset-8674/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 25 21:41:22.075: INFO: Waiting for StatefulSet statefulset-8674/ss2 to complete update Jan 25 21:41:22.075: INFO: Waiting for Pod statefulset-8674/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 25 21:41:32.076: INFO: Deleting all statefulset in ns statefulset-8674 Jan 25 21:41:32.079: INFO: Scaling statefulset ss2 to 0 Jan 25 21:42:02.099: INFO: Waiting for statefulset status.replicas updated to 0 Jan 25 21:42:02.103: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:42:02.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8674" for this suite. 
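[editor's note] The "Waiting for Pod ... to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57" lines above are the framework comparing each pod's controller-revision-hash label (stamped on pods by the StatefulSet controller) against the StatefulSet's target revision until the rollout or rollback converges. The real framework does this with client-go against the set's status; the sketch below shows the same check by shelling out to kubectl instead, to stay self-contained. Namespace, pod names, and the target revision are taken from the log; the polling interval and retry cap are arbitrary.

    // revisionwait.go - illustrative sketch of the per-pod revision check.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // podRevision reads the controller-revision-hash label from a pod.
    func podRevision(ns, pod string) (string, error) {
        out, err := exec.Command("kubectl", "-n", ns, "get", "pod", pod,
            "-o", "jsonpath={.metadata.labels.controller-revision-hash}").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        const ns, want = "statefulset-8674", "ss2-65c7964b94"
        for _, pod := range []string{"ss2-0", "ss2-1", "ss2-2"} {
            for i := 0; i < 30; i++ { // poll up to ~5 minutes per pod
                rev, err := podRevision(ns, pod)
                if err == nil && rev == want {
                    fmt.Printf("%s at revision %s\n", pod, rev)
                    break
                }
                fmt.Printf("waiting for %s: revision %q, want %q\n", pod, rev, want)
                time.Sleep(10 * time.Second)
            }
        }
    }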
• [SLOW TEST:192.539 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":64,"skipped":1003,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:42:02.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 25 21:42:02.259: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-1209 I0125 21:42:02.351589 8 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1209, replica count: 1 I0125 21:42:03.402514 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 21:42:04.402989 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 21:42:05.403453 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 21:42:06.403878 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 21:42:07.404353 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 21:42:08.405016 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 21:42:09.405570 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 25 21:42:09.565: INFO: Created: latency-svc-lbmzt Jan 25 21:42:09.571: INFO: Got endpoints: latency-svc-lbmzt [65.445765ms] Jan 25 21:42:09.696: INFO: Created: latency-svc-gb92g Jan 25 21:42:09.700: INFO: Got endpoints: latency-svc-gb92g [128.46975ms] Jan 25 21:42:09.788: INFO: Created: latency-svc-46scv Jan 25 21:42:09.789: INFO: Got endpoints: latency-svc-46scv [217.095244ms] Jan 25 21:42:09.917: INFO: Created: 
latency-svc-4xjw2 Jan 25 21:42:09.944: INFO: Got endpoints: latency-svc-4xjw2 [372.245885ms] Jan 25 21:42:09.945: INFO: Created: latency-svc-q7rfk Jan 25 21:42:10.011: INFO: Got endpoints: latency-svc-q7rfk [438.955638ms] Jan 25 21:42:10.071: INFO: Created: latency-svc-bcjs5 Jan 25 21:42:10.078: INFO: Got endpoints: latency-svc-bcjs5 [505.562418ms] Jan 25 21:42:10.109: INFO: Created: latency-svc-jtf96 Jan 25 21:42:10.151: INFO: Got endpoints: latency-svc-jtf96 [578.018145ms] Jan 25 21:42:10.153: INFO: Created: latency-svc-gjtxn Jan 25 21:42:10.869: INFO: Got endpoints: latency-svc-gjtxn [1.296302539s] Jan 25 21:42:10.886: INFO: Created: latency-svc-jdjwz Jan 25 21:42:11.071: INFO: Got endpoints: latency-svc-jdjwz [1.497701956s] Jan 25 21:42:11.126: INFO: Created: latency-svc-8f88x Jan 25 21:42:11.127: INFO: Got endpoints: latency-svc-8f88x [1.553591684s] Jan 25 21:42:11.175: INFO: Created: latency-svc-glk2z Jan 25 21:42:11.224: INFO: Got endpoints: latency-svc-glk2z [1.651007757s] Jan 25 21:42:11.243: INFO: Created: latency-svc-v2slg Jan 25 21:42:11.245: INFO: Got endpoints: latency-svc-v2slg [1.671691137s] Jan 25 21:42:11.290: INFO: Created: latency-svc-8kt2h Jan 25 21:42:11.297: INFO: Got endpoints: latency-svc-8kt2h [1.723634173s] Jan 25 21:42:11.324: INFO: Created: latency-svc-28z4t Jan 25 21:42:11.390: INFO: Got endpoints: latency-svc-28z4t [1.816575756s] Jan 25 21:42:11.436: INFO: Created: latency-svc-m7vdj Jan 25 21:42:11.542: INFO: Got endpoints: latency-svc-m7vdj [1.969441635s] Jan 25 21:42:11.543: INFO: Created: latency-svc-dww54 Jan 25 21:42:11.551: INFO: Got endpoints: latency-svc-dww54 [1.979505668s] Jan 25 21:42:11.575: INFO: Created: latency-svc-878v7 Jan 25 21:42:11.579: INFO: Got endpoints: latency-svc-878v7 [1.87898016s] Jan 25 21:42:11.600: INFO: Created: latency-svc-9l7zv Jan 25 21:42:11.605: INFO: Got endpoints: latency-svc-9l7zv [1.81565191s] Jan 25 21:42:11.626: INFO: Created: latency-svc-mc8tm Jan 25 21:42:11.704: INFO: Got endpoints: latency-svc-mc8tm [1.760035918s] Jan 25 21:42:11.707: INFO: Created: latency-svc-c2pc9 Jan 25 21:42:11.742: INFO: Got endpoints: latency-svc-c2pc9 [1.731250475s] Jan 25 21:42:11.751: INFO: Created: latency-svc-pbfbx Jan 25 21:42:11.756: INFO: Got endpoints: latency-svc-pbfbx [1.678443497s] Jan 25 21:42:11.791: INFO: Created: latency-svc-87p69 Jan 25 21:42:11.791: INFO: Got endpoints: latency-svc-87p69 [1.640016544s] Jan 25 21:42:11.959: INFO: Created: latency-svc-sm7vq Jan 25 21:42:11.981: INFO: Got endpoints: latency-svc-sm7vq [1.111970175s] Jan 25 21:42:12.006: INFO: Created: latency-svc-j9ss5 Jan 25 21:42:12.015: INFO: Got endpoints: latency-svc-j9ss5 [944.022768ms] Jan 25 21:42:12.033: INFO: Created: latency-svc-bt68f Jan 25 21:42:12.035: INFO: Got endpoints: latency-svc-bt68f [908.870174ms] Jan 25 21:42:12.054: INFO: Created: latency-svc-b5scp Jan 25 21:42:12.113: INFO: Got endpoints: latency-svc-b5scp [889.16015ms] Jan 25 21:42:12.146: INFO: Created: latency-svc-8d4qj Jan 25 21:42:12.153: INFO: Got endpoints: latency-svc-8d4qj [907.892583ms] Jan 25 21:42:12.188: INFO: Created: latency-svc-rn7df Jan 25 21:42:12.190: INFO: Got endpoints: latency-svc-rn7df [892.868006ms] Jan 25 21:42:12.334: INFO: Created: latency-svc-c8x4l Jan 25 21:42:12.426: INFO: Created: latency-svc-w4qnt Jan 25 21:42:12.539: INFO: Got endpoints: latency-svc-c8x4l [1.148888531s] Jan 25 21:42:12.540: INFO: Got endpoints: latency-svc-w4qnt [998.553934ms] Jan 25 21:42:12.588: INFO: Created: latency-svc-zcrl4 Jan 25 21:42:12.590: INFO: Got endpoints: 
latency-svc-zcrl4 [1.038905709s] Jan 25 21:42:12.726: INFO: Created: latency-svc-s67md Jan 25 21:42:12.768: INFO: Got endpoints: latency-svc-s67md [1.188854949s] Jan 25 21:42:12.769: INFO: Created: latency-svc-vhmmk Jan 25 21:42:12.808: INFO: Got endpoints: latency-svc-vhmmk [1.203252756s] Jan 25 21:42:12.940: INFO: Created: latency-svc-kq5w6 Jan 25 21:42:13.007: INFO: Got endpoints: latency-svc-kq5w6 [1.302452252s] Jan 25 21:42:13.010: INFO: Created: latency-svc-qxbxd Jan 25 21:42:13.017: INFO: Got endpoints: latency-svc-qxbxd [1.274369092s] Jan 25 21:42:13.155: INFO: Created: latency-svc-l6f7j Jan 25 21:42:13.167: INFO: Got endpoints: latency-svc-l6f7j [1.410870267s] Jan 25 21:42:13.335: INFO: Created: latency-svc-bcdh5 Jan 25 21:42:13.337: INFO: Got endpoints: latency-svc-bcdh5 [1.545826606s] Jan 25 21:42:13.375: INFO: Created: latency-svc-cbcdp Jan 25 21:42:13.386: INFO: Got endpoints: latency-svc-cbcdp [1.40542756s] Jan 25 21:42:13.402: INFO: Created: latency-svc-n4wvr Jan 25 21:42:13.537: INFO: Got endpoints: latency-svc-n4wvr [1.522076638s] Jan 25 21:42:13.591: INFO: Created: latency-svc-kftds Jan 25 21:42:13.591: INFO: Created: latency-svc-ggdfj Jan 25 21:42:13.603: INFO: Got endpoints: latency-svc-kftds [1.489421499s] Jan 25 21:42:13.603: INFO: Got endpoints: latency-svc-ggdfj [1.567522061s] Jan 25 21:42:13.680: INFO: Created: latency-svc-ng7k9 Jan 25 21:42:13.710: INFO: Created: latency-svc-8sr8w Jan 25 21:42:13.710: INFO: Got endpoints: latency-svc-ng7k9 [1.557183164s] Jan 25 21:42:13.715: INFO: Got endpoints: latency-svc-8sr8w [1.525070303s] Jan 25 21:42:13.849: INFO: Created: latency-svc-svnm9 Jan 25 21:42:13.885: INFO: Got endpoints: latency-svc-svnm9 [1.346322518s] Jan 25 21:42:13.891: INFO: Created: latency-svc-4s58w Jan 25 21:42:13.908: INFO: Got endpoints: latency-svc-4s58w [1.36750475s] Jan 25 21:42:14.009: INFO: Created: latency-svc-mb65f Jan 25 21:42:14.014: INFO: Got endpoints: latency-svc-mb65f [1.423057364s] Jan 25 21:42:14.056: INFO: Created: latency-svc-dn4ms Jan 25 21:42:14.062: INFO: Got endpoints: latency-svc-dn4ms [1.293164911s] Jan 25 21:42:14.096: INFO: Created: latency-svc-dj9mv Jan 25 21:42:14.188: INFO: Got endpoints: latency-svc-dj9mv [1.379689829s] Jan 25 21:42:14.195: INFO: Created: latency-svc-r8j56 Jan 25 21:42:14.198: INFO: Got endpoints: latency-svc-r8j56 [1.190460529s] Jan 25 21:42:14.237: INFO: Created: latency-svc-kbms9 Jan 25 21:42:14.251: INFO: Got endpoints: latency-svc-kbms9 [1.234776801s] Jan 25 21:42:14.348: INFO: Created: latency-svc-5bzdc Jan 25 21:42:14.352: INFO: Got endpoints: latency-svc-5bzdc [1.184503004s] Jan 25 21:42:14.379: INFO: Created: latency-svc-rvhbg Jan 25 21:42:14.386: INFO: Got endpoints: latency-svc-rvhbg [1.04906625s] Jan 25 21:42:14.544: INFO: Created: latency-svc-fsbmv Jan 25 21:42:14.551: INFO: Got endpoints: latency-svc-fsbmv [1.16427602s] Jan 25 21:42:14.585: INFO: Created: latency-svc-ksx44 Jan 25 21:42:14.607: INFO: Got endpoints: latency-svc-ksx44 [1.070046834s] Jan 25 21:42:14.627: INFO: Created: latency-svc-pcwpm Jan 25 21:42:14.687: INFO: Got endpoints: latency-svc-pcwpm [1.083362442s] Jan 25 21:42:14.691: INFO: Created: latency-svc-xlm7z Jan 25 21:42:14.693: INFO: Got endpoints: latency-svc-xlm7z [1.08879249s] Jan 25 21:42:14.724: INFO: Created: latency-svc-nk98n Jan 25 21:42:14.735: INFO: Got endpoints: latency-svc-nk98n [1.024071112s] Jan 25 21:42:14.878: INFO: Created: latency-svc-zjc8n Jan 25 21:42:14.918: INFO: Created: latency-svc-qh2rn Jan 25 21:42:14.918: INFO: Got endpoints: latency-svc-zjc8n 
[1.202826354s] Jan 25 21:42:14.923: INFO: Got endpoints: latency-svc-qh2rn [1.036939548s] Jan 25 21:42:14.949: INFO: Created: latency-svc-6l6t6 Jan 25 21:42:14.956: INFO: Got endpoints: latency-svc-6l6t6 [1.046707447s] Jan 25 21:42:14.975: INFO: Created: latency-svc-7tf7l Jan 25 21:42:15.050: INFO: Got endpoints: latency-svc-7tf7l [1.035926917s] Jan 25 21:42:15.055: INFO: Created: latency-svc-mcv2m Jan 25 21:42:15.069: INFO: Got endpoints: latency-svc-mcv2m [1.007047298s] Jan 25 21:42:15.083: INFO: Created: latency-svc-nxfwz Jan 25 21:42:15.085: INFO: Got endpoints: latency-svc-nxfwz [896.608666ms] Jan 25 21:42:15.098: INFO: Created: latency-svc-44brw Jan 25 21:42:15.226: INFO: Got endpoints: latency-svc-44brw [1.02748191s] Jan 25 21:42:15.253: INFO: Created: latency-svc-vhr9g Jan 25 21:42:15.260: INFO: Got endpoints: latency-svc-vhr9g [1.008848541s] Jan 25 21:42:15.285: INFO: Created: latency-svc-d644d Jan 25 21:42:15.313: INFO: Got endpoints: latency-svc-d644d [961.199019ms] Jan 25 21:42:15.315: INFO: Created: latency-svc-l2gt4 Jan 25 21:42:15.392: INFO: Got endpoints: latency-svc-l2gt4 [1.005807089s] Jan 25 21:42:15.404: INFO: Created: latency-svc-4kgtd Jan 25 21:42:15.411: INFO: Got endpoints: latency-svc-4kgtd [860.103744ms] Jan 25 21:42:15.428: INFO: Created: latency-svc-c4fmx Jan 25 21:42:15.434: INFO: Got endpoints: latency-svc-c4fmx [826.79766ms] Jan 25 21:42:15.460: INFO: Created: latency-svc-n9vk2 Jan 25 21:42:15.463: INFO: Got endpoints: latency-svc-n9vk2 [776.563162ms] Jan 25 21:42:15.689: INFO: Created: latency-svc-vhclv Jan 25 21:42:15.712: INFO: Got endpoints: latency-svc-vhclv [1.019406914s] Jan 25 21:42:15.733: INFO: Created: latency-svc-kk8cj Jan 25 21:42:15.739: INFO: Got endpoints: latency-svc-kk8cj [1.004338879s] Jan 25 21:42:15.772: INFO: Created: latency-svc-8l6mg Jan 25 21:42:15.888: INFO: Got endpoints: latency-svc-8l6mg [970.017548ms] Jan 25 21:42:15.919: INFO: Created: latency-svc-75sr6 Jan 25 21:42:15.950: INFO: Got endpoints: latency-svc-75sr6 [1.026660544s] Jan 25 21:42:15.956: INFO: Created: latency-svc-kxbf4 Jan 25 21:42:15.967: INFO: Got endpoints: latency-svc-kxbf4 [1.0106582s] Jan 25 21:42:16.081: INFO: Created: latency-svc-tpddx Jan 25 21:42:16.106: INFO: Got endpoints: latency-svc-tpddx [1.055423087s] Jan 25 21:42:16.113: INFO: Created: latency-svc-2qh72 Jan 25 21:42:16.115: INFO: Got endpoints: latency-svc-2qh72 [1.045827974s] Jan 25 21:42:16.239: INFO: Created: latency-svc-zh2hj Jan 25 21:42:16.260: INFO: Got endpoints: latency-svc-zh2hj [1.175161184s] Jan 25 21:42:16.289: INFO: Created: latency-svc-ddb8k Jan 25 21:42:16.297: INFO: Got endpoints: latency-svc-ddb8k [1.071159884s] Jan 25 21:42:16.319: INFO: Created: latency-svc-pzv49 Jan 25 21:42:16.387: INFO: Created: latency-svc-2g69m Jan 25 21:42:16.387: INFO: Got endpoints: latency-svc-pzv49 [1.126525116s] Jan 25 21:42:16.424: INFO: Got endpoints: latency-svc-2g69m [1.110721929s] Jan 25 21:42:16.434: INFO: Created: latency-svc-w9xhg Jan 25 21:42:16.435: INFO: Got endpoints: latency-svc-w9xhg [1.042843087s] Jan 25 21:42:16.474: INFO: Created: latency-svc-nczk2 Jan 25 21:42:16.593: INFO: Got endpoints: latency-svc-nczk2 [1.181293449s] Jan 25 21:42:16.610: INFO: Created: latency-svc-wkdr5 Jan 25 21:42:16.616: INFO: Got endpoints: latency-svc-wkdr5 [1.181923698s] Jan 25 21:42:16.685: INFO: Created: latency-svc-qqzsl Jan 25 21:42:16.735: INFO: Got endpoints: latency-svc-qqzsl [1.271281811s] Jan 25 21:42:16.772: INFO: Created: latency-svc-mkhm4 Jan 25 21:42:16.775: INFO: Got endpoints: latency-svc-mkhm4 
[1.062281876s] Jan 25 21:42:16.798: INFO: Created: latency-svc-s6h6c Jan 25 21:42:16.811: INFO: Got endpoints: latency-svc-s6h6c [75.570437ms] Jan 25 21:42:16.830: INFO: Created: latency-svc-zzrbh Jan 25 21:42:16.898: INFO: Got endpoints: latency-svc-zzrbh [1.158250358s] Jan 25 21:42:16.921: INFO: Created: latency-svc-9ghcl Jan 25 21:42:16.945: INFO: Created: latency-svc-c8d57 Jan 25 21:42:16.946: INFO: Got endpoints: latency-svc-9ghcl [1.05823906s] Jan 25 21:42:16.948: INFO: Got endpoints: latency-svc-c8d57 [997.121429ms] Jan 25 21:42:16.972: INFO: Created: latency-svc-znrmk Jan 25 21:42:16.990: INFO: Got endpoints: latency-svc-znrmk [1.02310875s] Jan 25 21:42:17.098: INFO: Created: latency-svc-mmzgz Jan 25 21:42:17.105: INFO: Got endpoints: latency-svc-mmzgz [999.320559ms] Jan 25 21:42:17.135: INFO: Created: latency-svc-z62tz Jan 25 21:42:17.170: INFO: Got endpoints: latency-svc-z62tz [1.055605749s] Jan 25 21:42:17.173: INFO: Created: latency-svc-bhgrc Jan 25 21:42:17.184: INFO: Got endpoints: latency-svc-bhgrc [923.521902ms] Jan 25 21:42:17.263: INFO: Created: latency-svc-klk2c Jan 25 21:42:17.316: INFO: Created: latency-svc-wrpjj Jan 25 21:42:17.317: INFO: Got endpoints: latency-svc-klk2c [1.019481057s] Jan 25 21:42:17.321: INFO: Got endpoints: latency-svc-wrpjj [934.00763ms] Jan 25 21:42:17.349: INFO: Created: latency-svc-tfthk Jan 25 21:42:17.357: INFO: Got endpoints: latency-svc-tfthk [932.276173ms] Jan 25 21:42:17.406: INFO: Created: latency-svc-92s97 Jan 25 21:42:17.410: INFO: Got endpoints: latency-svc-92s97 [975.364372ms] Jan 25 21:42:17.484: INFO: Created: latency-svc-psv7t Jan 25 21:42:17.484: INFO: Got endpoints: latency-svc-psv7t [891.318835ms] Jan 25 21:42:17.566: INFO: Created: latency-svc-jhhhn Jan 25 21:42:17.604: INFO: Got endpoints: latency-svc-jhhhn [987.657969ms] Jan 25 21:42:17.610: INFO: Created: latency-svc-xmxwf Jan 25 21:42:17.620: INFO: Got endpoints: latency-svc-xmxwf [844.368461ms] Jan 25 21:42:17.634: INFO: Created: latency-svc-wqd4m Jan 25 21:42:17.639: INFO: Got endpoints: latency-svc-wqd4m [828.024178ms] Jan 25 21:42:17.660: INFO: Created: latency-svc-ws84p Jan 25 21:42:17.761: INFO: Created: latency-svc-mgknn Jan 25 21:42:17.761: INFO: Got endpoints: latency-svc-ws84p [863.126271ms] Jan 25 21:42:17.788: INFO: Got endpoints: latency-svc-mgknn [842.140201ms] Jan 25 21:42:17.809: INFO: Created: latency-svc-vsfhx Jan 25 21:42:17.937: INFO: Got endpoints: latency-svc-vsfhx [989.624748ms] Jan 25 21:42:17.944: INFO: Created: latency-svc-jv4kf Jan 25 21:42:17.984: INFO: Got endpoints: latency-svc-jv4kf [993.643246ms] Jan 25 21:42:17.989: INFO: Created: latency-svc-qvn68 Jan 25 21:42:18.000: INFO: Got endpoints: latency-svc-qvn68 [894.660755ms] Jan 25 21:42:18.070: INFO: Created: latency-svc-fjt8s Jan 25 21:42:18.075: INFO: Got endpoints: latency-svc-fjt8s [904.099576ms] Jan 25 21:42:18.095: INFO: Created: latency-svc-kgttv Jan 25 21:42:18.104: INFO: Got endpoints: latency-svc-kgttv [920.086642ms] Jan 25 21:42:18.118: INFO: Created: latency-svc-gvtzx Jan 25 21:42:18.143: INFO: Got endpoints: latency-svc-gvtzx [826.422332ms] Jan 25 21:42:18.147: INFO: Created: latency-svc-4dmcr Jan 25 21:42:18.226: INFO: Got endpoints: latency-svc-4dmcr [904.228239ms] Jan 25 21:42:18.231: INFO: Created: latency-svc-nlbbm Jan 25 21:42:18.317: INFO: Got endpoints: latency-svc-nlbbm [960.119591ms] Jan 25 21:42:18.320: INFO: Created: latency-svc-g84w9 Jan 25 21:42:18.413: INFO: Got endpoints: latency-svc-g84w9 [1.002642811s] Jan 25 21:42:18.437: INFO: Created: latency-svc-wb2pt Jan 25 
21:42:18.441: INFO: Got endpoints: latency-svc-wb2pt [956.893609ms] Jan 25 21:42:18.466: INFO: Created: latency-svc-6csl2 Jan 25 21:42:18.491: INFO: Got endpoints: latency-svc-6csl2 [886.545309ms] Jan 25 21:42:18.497: INFO: Created: latency-svc-xhbfk Jan 25 21:42:18.571: INFO: Got endpoints: latency-svc-xhbfk [951.213165ms] Jan 25 21:42:18.624: INFO: Created: latency-svc-6ktt9 Jan 25 21:42:18.630: INFO: Got endpoints: latency-svc-6ktt9 [991.372127ms] Jan 25 21:42:18.667: INFO: Created: latency-svc-c9vbr Jan 25 21:42:18.667: INFO: Got endpoints: latency-svc-c9vbr [906.120851ms] Jan 25 21:42:18.786: INFO: Created: latency-svc-jw585 Jan 25 21:42:18.792: INFO: Got endpoints: latency-svc-jw585 [1.003331356s] Jan 25 21:42:18.813: INFO: Created: latency-svc-mgrkw Jan 25 21:42:18.823: INFO: Got endpoints: latency-svc-mgrkw [884.695515ms] Jan 25 21:42:18.871: INFO: Created: latency-svc-4xj5t Jan 25 21:42:18.876: INFO: Got endpoints: latency-svc-4xj5t [891.247669ms] Jan 25 21:42:18.914: INFO: Created: latency-svc-zh92f Jan 25 21:42:18.925: INFO: Got endpoints: latency-svc-zh92f [924.128279ms] Jan 25 21:42:18.943: INFO: Created: latency-svc-k5855 Jan 25 21:42:18.947: INFO: Got endpoints: latency-svc-k5855 [872.076038ms] Jan 25 21:42:18.968: INFO: Created: latency-svc-n27fb Jan 25 21:42:18.969: INFO: Got endpoints: latency-svc-n27fb [865.085601ms] Jan 25 21:42:18.990: INFO: Created: latency-svc-qfzdc Jan 25 21:42:19.088: INFO: Got endpoints: latency-svc-qfzdc [945.109405ms] Jan 25 21:42:19.091: INFO: Created: latency-svc-cx7ng Jan 25 21:42:19.112: INFO: Got endpoints: latency-svc-cx7ng [886.142969ms] Jan 25 21:42:19.115: INFO: Created: latency-svc-5hlbv Jan 25 21:42:19.121: INFO: Got endpoints: latency-svc-5hlbv [804.166383ms] Jan 25 21:42:19.145: INFO: Created: latency-svc-zcnwh Jan 25 21:42:19.160: INFO: Got endpoints: latency-svc-zcnwh [746.215919ms] Jan 25 21:42:19.179: INFO: Created: latency-svc-fgcnj Jan 25 21:42:19.237: INFO: Got endpoints: latency-svc-fgcnj [795.044992ms] Jan 25 21:42:19.248: INFO: Created: latency-svc-p2x4j Jan 25 21:42:19.255: INFO: Got endpoints: latency-svc-p2x4j [763.969877ms] Jan 25 21:42:19.276: INFO: Created: latency-svc-c9248 Jan 25 21:42:19.290: INFO: Got endpoints: latency-svc-c9248 [718.94775ms] Jan 25 21:42:19.318: INFO: Created: latency-svc-tx8mk Jan 25 21:42:19.319: INFO: Got endpoints: latency-svc-tx8mk [688.029622ms] Jan 25 21:42:19.443: INFO: Created: latency-svc-9dndv Jan 25 21:42:19.444: INFO: Got endpoints: latency-svc-9dndv [776.831665ms] Jan 25 21:42:19.488: INFO: Created: latency-svc-554nf Jan 25 21:42:19.509: INFO: Got endpoints: latency-svc-554nf [717.140643ms] Jan 25 21:42:19.529: INFO: Created: latency-svc-f548j Jan 25 21:42:19.535: INFO: Got endpoints: latency-svc-f548j [712.323088ms] Jan 25 21:42:19.587: INFO: Created: latency-svc-pzmfp Jan 25 21:42:19.615: INFO: Created: latency-svc-pz8pn Jan 25 21:42:19.615: INFO: Got endpoints: latency-svc-pzmfp [738.950524ms] Jan 25 21:42:19.647: INFO: Got endpoints: latency-svc-pz8pn [721.854629ms] Jan 25 21:42:19.671: INFO: Created: latency-svc-rc9jt Jan 25 21:42:19.678: INFO: Got endpoints: latency-svc-rc9jt [730.16575ms] Jan 25 21:42:19.783: INFO: Created: latency-svc-pd94h Jan 25 21:42:19.786: INFO: Got endpoints: latency-svc-pd94h [816.608321ms] Jan 25 21:42:19.952: INFO: Created: latency-svc-hpsb5 Jan 25 21:42:19.983: INFO: Got endpoints: latency-svc-hpsb5 [894.622082ms] Jan 25 21:42:20.034: INFO: Created: latency-svc-m7k78 Jan 25 21:42:20.047: INFO: Got endpoints: latency-svc-m7k78 [934.588262ms] Jan 
25 21:42:20.156: INFO: Created: latency-svc-zvlqn Jan 25 21:42:20.159: INFO: Got endpoints: latency-svc-zvlqn [1.037731815s] Jan 25 21:42:20.217: INFO: Created: latency-svc-4q9dk Jan 25 21:42:20.244: INFO: Got endpoints: latency-svc-4q9dk [1.084512199s] Jan 25 21:42:20.306: INFO: Created: latency-svc-jdvt7 Jan 25 21:42:20.314: INFO: Got endpoints: latency-svc-jdvt7 [1.077269323s] Jan 25 21:42:20.339: INFO: Created: latency-svc-jswfc Jan 25 21:42:20.375: INFO: Got endpoints: latency-svc-jswfc [1.120136219s] Jan 25 21:42:20.382: INFO: Created: latency-svc-xzlbc Jan 25 21:42:20.387: INFO: Got endpoints: latency-svc-xzlbc [1.096461581s] Jan 25 21:42:20.515: INFO: Created: latency-svc-595r6 Jan 25 21:42:20.530: INFO: Got endpoints: latency-svc-595r6 [1.211100266s] Jan 25 21:42:20.567: INFO: Created: latency-svc-qrmpm Jan 25 21:42:20.568: INFO: Got endpoints: latency-svc-qrmpm [1.123151812s] Jan 25 21:42:20.599: INFO: Created: latency-svc-c4jw8 Jan 25 21:42:20.615: INFO: Got endpoints: latency-svc-c4jw8 [1.10565064s] Jan 25 21:42:20.668: INFO: Created: latency-svc-r6m78 Jan 25 21:42:20.673: INFO: Got endpoints: latency-svc-r6m78 [1.138284992s] Jan 25 21:42:20.710: INFO: Created: latency-svc-vtrvn Jan 25 21:42:20.729: INFO: Got endpoints: latency-svc-vtrvn [1.114299067s] Jan 25 21:42:20.820: INFO: Created: latency-svc-b8dt4 Jan 25 21:42:20.859: INFO: Got endpoints: latency-svc-b8dt4 [1.21214799s] Jan 25 21:42:20.866: INFO: Created: latency-svc-8sw9l Jan 25 21:42:20.870: INFO: Got endpoints: latency-svc-8sw9l [1.192443453s] Jan 25 21:42:20.963: INFO: Created: latency-svc-mnzqb Jan 25 21:42:20.968: INFO: Got endpoints: latency-svc-mnzqb [1.181940774s] Jan 25 21:42:20.993: INFO: Created: latency-svc-sbw8q Jan 25 21:42:21.001: INFO: Got endpoints: latency-svc-sbw8q [1.01755012s] Jan 25 21:42:21.019: INFO: Created: latency-svc-tvnp2 Jan 25 21:42:21.031: INFO: Got endpoints: latency-svc-tvnp2 [984.355627ms] Jan 25 21:42:21.057: INFO: Created: latency-svc-p4b45 Jan 25 21:42:21.091: INFO: Got endpoints: latency-svc-p4b45 [932.062411ms] Jan 25 21:42:21.096: INFO: Created: latency-svc-zws5t Jan 25 21:42:21.100: INFO: Got endpoints: latency-svc-zws5t [855.238467ms] Jan 25 21:42:21.123: INFO: Created: latency-svc-6wc2z Jan 25 21:42:21.132: INFO: Got endpoints: latency-svc-6wc2z [817.701301ms] Jan 25 21:42:21.164: INFO: Created: latency-svc-c6724 Jan 25 21:42:21.279: INFO: Got endpoints: latency-svc-c6724 [903.255802ms] Jan 25 21:42:21.297: INFO: Created: latency-svc-h8z4g Jan 25 21:42:21.304: INFO: Got endpoints: latency-svc-h8z4g [916.472271ms] Jan 25 21:42:21.335: INFO: Created: latency-svc-24ckf Jan 25 21:42:21.344: INFO: Got endpoints: latency-svc-24ckf [813.87696ms] Jan 25 21:42:21.413: INFO: Created: latency-svc-f5bfh Jan 25 21:42:21.425: INFO: Got endpoints: latency-svc-f5bfh [857.566586ms] Jan 25 21:42:21.446: INFO: Created: latency-svc-9srmg Jan 25 21:42:21.468: INFO: Got endpoints: latency-svc-9srmg [852.542242ms] Jan 25 21:42:21.472: INFO: Created: latency-svc-pqv2s Jan 25 21:42:21.482: INFO: Got endpoints: latency-svc-pqv2s [807.894764ms] Jan 25 21:42:21.538: INFO: Created: latency-svc-cgs8m Jan 25 21:42:21.542: INFO: Got endpoints: latency-svc-cgs8m [811.77999ms] Jan 25 21:42:21.619: INFO: Created: latency-svc-br5bb Jan 25 21:42:21.675: INFO: Created: latency-svc-jvsjj Jan 25 21:42:21.676: INFO: Got endpoints: latency-svc-br5bb [816.860575ms] Jan 25 21:42:21.705: INFO: Created: latency-svc-qz69p Jan 25 21:42:21.705: INFO: Got endpoints: latency-svc-jvsjj [834.847552ms] Jan 25 21:42:21.727: INFO: 
Got endpoints: latency-svc-qz69p [758.634979ms] Jan 25 21:42:21.776: INFO: Created: latency-svc-qhqz8 Jan 25 21:42:21.878: INFO: Got endpoints: latency-svc-qhqz8 [877.364507ms] Jan 25 21:42:21.932: INFO: Created: latency-svc-55fhs Jan 25 21:42:21.939: INFO: Got endpoints: latency-svc-55fhs [907.670148ms] Jan 25 21:42:21.962: INFO: Created: latency-svc-ttfjf Jan 25 21:42:22.048: INFO: Got endpoints: latency-svc-ttfjf [956.080665ms] Jan 25 21:42:22.061: INFO: Created: latency-svc-9p9pp Jan 25 21:42:22.065: INFO: Got endpoints: latency-svc-9p9pp [965.476633ms] Jan 25 21:42:22.139: INFO: Created: latency-svc-kcddn Jan 25 21:42:22.218: INFO: Created: latency-svc-fmwbc Jan 25 21:42:22.220: INFO: Got endpoints: latency-svc-kcddn [1.088202739s] Jan 25 21:42:22.235: INFO: Got endpoints: latency-svc-fmwbc [955.456754ms] Jan 25 21:42:22.261: INFO: Created: latency-svc-vb8bj Jan 25 21:42:22.279: INFO: Got endpoints: latency-svc-vb8bj [975.02583ms] Jan 25 21:42:22.370: INFO: Created: latency-svc-2xznx Jan 25 21:42:22.392: INFO: Got endpoints: latency-svc-2xznx [1.047370193s] Jan 25 21:42:22.396: INFO: Created: latency-svc-7kkpx Jan 25 21:42:22.410: INFO: Got endpoints: latency-svc-7kkpx [984.563322ms] Jan 25 21:42:22.435: INFO: Created: latency-svc-rzqsm Jan 25 21:42:22.436: INFO: Got endpoints: latency-svc-rzqsm [967.763616ms] Jan 25 21:42:22.554: INFO: Created: latency-svc-5p9fn Jan 25 21:42:22.698: INFO: Got endpoints: latency-svc-5p9fn [1.216173765s] Jan 25 21:42:22.700: INFO: Created: latency-svc-8dwmz Jan 25 21:42:22.706: INFO: Got endpoints: latency-svc-8dwmz [1.16415562s] Jan 25 21:42:22.735: INFO: Created: latency-svc-s2nzv Jan 25 21:42:22.737: INFO: Got endpoints: latency-svc-s2nzv [1.061376654s] Jan 25 21:42:22.774: INFO: Created: latency-svc-wwpj7 Jan 25 21:42:22.778: INFO: Got endpoints: latency-svc-wwpj7 [1.072444489s] Jan 25 21:42:22.838: INFO: Created: latency-svc-8qg5h Jan 25 21:42:22.867: INFO: Got endpoints: latency-svc-8qg5h [1.14025203s] Jan 25 21:42:22.876: INFO: Created: latency-svc-csg9v Jan 25 21:42:22.877: INFO: Got endpoints: latency-svc-csg9v [998.143223ms] Jan 25 21:42:22.893: INFO: Created: latency-svc-zcswr Jan 25 21:42:22.903: INFO: Got endpoints: latency-svc-zcswr [963.829212ms] Jan 25 21:42:22.922: INFO: Created: latency-svc-d7w6t Jan 25 21:42:22.928: INFO: Got endpoints: latency-svc-d7w6t [879.690265ms] Jan 25 21:42:23.084: INFO: Created: latency-svc-2ddcw Jan 25 21:42:23.086: INFO: Got endpoints: latency-svc-2ddcw [1.020818595s] Jan 25 21:42:23.117: INFO: Created: latency-svc-jwv74 Jan 25 21:42:23.130: INFO: Got endpoints: latency-svc-jwv74 [909.672706ms] Jan 25 21:42:23.213: INFO: Created: latency-svc-m26k8 Jan 25 21:42:23.239: INFO: Got endpoints: latency-svc-m26k8 [1.004752209s] Jan 25 21:42:23.242: INFO: Created: latency-svc-l2pnw Jan 25 21:42:23.258: INFO: Got endpoints: latency-svc-l2pnw [979.058975ms] Jan 25 21:42:23.304: INFO: Created: latency-svc-q9n9r Jan 25 21:42:23.381: INFO: Got endpoints: latency-svc-q9n9r [989.122153ms] Jan 25 21:42:23.414: INFO: Created: latency-svc-6m2xf Jan 25 21:42:23.423: INFO: Got endpoints: latency-svc-6m2xf [1.012678017s] Jan 25 21:42:23.461: INFO: Created: latency-svc-f56dg Jan 25 21:42:23.470: INFO: Got endpoints: latency-svc-f56dg [1.034067419s] Jan 25 21:42:23.598: INFO: Created: latency-svc-frlhw Jan 25 21:42:23.634: INFO: Got endpoints: latency-svc-frlhw [935.88257ms] Jan 25 21:42:23.691: INFO: Created: latency-svc-dkqdd Jan 25 21:42:23.696: INFO: Got endpoints: latency-svc-dkqdd [989.554153ms] Jan 25 21:42:23.842: INFO: 
Created: latency-svc-7lm4n Jan 25 21:42:23.845: INFO: Got endpoints: latency-svc-7lm4n [1.107101726s] Jan 25 21:42:23.884: INFO: Created: latency-svc-vqhdz Jan 25 21:42:23.920: INFO: Got endpoints: latency-svc-vqhdz [1.142450121s] Jan 25 21:42:24.024: INFO: Created: latency-svc-hrm7z Jan 25 21:42:24.028: INFO: Got endpoints: latency-svc-hrm7z [1.160511509s] Jan 25 21:42:24.067: INFO: Created: latency-svc-27424 Jan 25 21:42:24.074: INFO: Got endpoints: latency-svc-27424 [1.196833934s] Jan 25 21:42:24.112: INFO: Created: latency-svc-4pmxc Jan 25 21:42:24.214: INFO: Got endpoints: latency-svc-4pmxc [1.31097004s] Jan 25 21:42:24.214: INFO: Latencies: [75.570437ms 128.46975ms 217.095244ms 372.245885ms 438.955638ms 505.562418ms 578.018145ms 688.029622ms 712.323088ms 717.140643ms 718.94775ms 721.854629ms 730.16575ms 738.950524ms 746.215919ms 758.634979ms 763.969877ms 776.563162ms 776.831665ms 795.044992ms 804.166383ms 807.894764ms 811.77999ms 813.87696ms 816.608321ms 816.860575ms 817.701301ms 826.422332ms 826.79766ms 828.024178ms 834.847552ms 842.140201ms 844.368461ms 852.542242ms 855.238467ms 857.566586ms 860.103744ms 863.126271ms 865.085601ms 872.076038ms 877.364507ms 879.690265ms 884.695515ms 886.142969ms 886.545309ms 889.16015ms 891.247669ms 891.318835ms 892.868006ms 894.622082ms 894.660755ms 896.608666ms 903.255802ms 904.099576ms 904.228239ms 906.120851ms 907.670148ms 907.892583ms 908.870174ms 909.672706ms 916.472271ms 920.086642ms 923.521902ms 924.128279ms 932.062411ms 932.276173ms 934.00763ms 934.588262ms 935.88257ms 944.022768ms 945.109405ms 951.213165ms 955.456754ms 956.080665ms 956.893609ms 960.119591ms 961.199019ms 963.829212ms 965.476633ms 967.763616ms 970.017548ms 975.02583ms 975.364372ms 979.058975ms 984.355627ms 984.563322ms 987.657969ms 989.122153ms 989.554153ms 989.624748ms 991.372127ms 993.643246ms 997.121429ms 998.143223ms 998.553934ms 999.320559ms 1.002642811s 1.003331356s 1.004338879s 1.004752209s 1.005807089s 1.007047298s 1.008848541s 1.0106582s 1.012678017s 1.01755012s 1.019406914s 1.019481057s 1.020818595s 1.02310875s 1.024071112s 1.026660544s 1.02748191s 1.034067419s 1.035926917s 1.036939548s 1.037731815s 1.038905709s 1.042843087s 1.045827974s 1.046707447s 1.047370193s 1.04906625s 1.055423087s 1.055605749s 1.05823906s 1.061376654s 1.062281876s 1.070046834s 1.071159884s 1.072444489s 1.077269323s 1.083362442s 1.084512199s 1.088202739s 1.08879249s 1.096461581s 1.10565064s 1.107101726s 1.110721929s 1.111970175s 1.114299067s 1.120136219s 1.123151812s 1.126525116s 1.138284992s 1.14025203s 1.142450121s 1.148888531s 1.158250358s 1.160511509s 1.16415562s 1.16427602s 1.175161184s 1.181293449s 1.181923698s 1.181940774s 1.184503004s 1.188854949s 1.190460529s 1.192443453s 1.196833934s 1.202826354s 1.203252756s 1.211100266s 1.21214799s 1.216173765s 1.234776801s 1.271281811s 1.274369092s 1.293164911s 1.296302539s 1.302452252s 1.31097004s 1.346322518s 1.36750475s 1.379689829s 1.40542756s 1.410870267s 1.423057364s 1.489421499s 1.497701956s 1.522076638s 1.525070303s 1.545826606s 1.553591684s 1.557183164s 1.567522061s 1.640016544s 1.651007757s 1.671691137s 1.678443497s 1.723634173s 1.731250475s 1.760035918s 1.81565191s 1.816575756s 1.87898016s 1.969441635s 1.979505668s] Jan 25 21:42:24.215: INFO: 50 %ile: 1.005807089s Jan 25 21:42:24.215: INFO: 90 %ile: 1.489421499s Jan 25 21:42:24.215: INFO: 99 %ile: 1.969441635s Jan 25 21:42:24.215: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:42:24.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-1209" for this suite. • [SLOW TEST:22.044 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":65,"skipped":1011,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:42:24.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:42:24.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3304" for this suite. 
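The discovery walk just performed — fetch /apis, locate the apiextensions.k8s.io group, drill into apiextensions.k8s.io/v1, and confirm the customresourcedefinitions resource — can be reproduced by hand against the same API server. A minimal sketch, assuming kubectl access and jq for filtering (neither command is taken from the run):

    # Locate the apiextensions.k8s.io group in the /apis discovery document
    kubectl get --raw /apis | jq '.groups[] | select(.name == "apiextensions.k8s.io")'
    # List the group's versions; v1 should be present
    kubectl get --raw /apis/apiextensions.k8s.io | jq '.versions'
    # Confirm the customresourcedefinitions resource in the group/version document
    kubectl get --raw /apis/apiextensions.k8s.io/v1 | jq '.resources[] | select(.name == "customresourcedefinitions")'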
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":66,"skipped":1040,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:42:24.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Jan 25 21:42:24.404: INFO: Waiting up to 5m0s for pod "var-expansion-14ec1aa5-3fff-4c08-88b6-fb01d964525f" in namespace "var-expansion-7268" to be "success or failure" Jan 25 21:42:24.410: INFO: Pod "var-expansion-14ec1aa5-3fff-4c08-88b6-fb01d964525f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.574542ms Jan 25 21:42:26.422: INFO: Pod "var-expansion-14ec1aa5-3fff-4c08-88b6-fb01d964525f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017407962s Jan 25 21:42:28.430: INFO: Pod "var-expansion-14ec1aa5-3fff-4c08-88b6-fb01d964525f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0257663s Jan 25 21:42:30.456: INFO: Pod "var-expansion-14ec1aa5-3fff-4c08-88b6-fb01d964525f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052227158s Jan 25 21:42:32.482: INFO: Pod "var-expansion-14ec1aa5-3fff-4c08-88b6-fb01d964525f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077545295s Jan 25 21:42:34.508: INFO: Pod "var-expansion-14ec1aa5-3fff-4c08-88b6-fb01d964525f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.104245227s Jan 25 21:42:36.597: INFO: Pod "var-expansion-14ec1aa5-3fff-4c08-88b6-fb01d964525f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.19312993s STEP: Saw pod success Jan 25 21:42:36.598: INFO: Pod "var-expansion-14ec1aa5-3fff-4c08-88b6-fb01d964525f" satisfied condition "success or failure" Jan 25 21:42:36.621: INFO: Trying to get logs from node jerma-node pod var-expansion-14ec1aa5-3fff-4c08-88b6-fb01d964525f container dapi-container: STEP: delete the pod Jan 25 21:42:36.782: INFO: Waiting for pod var-expansion-14ec1aa5-3fff-4c08-88b6-fb01d964525f to disappear Jan 25 21:42:36.813: INFO: Pod var-expansion-14ec1aa5-3fff-4c08-88b6-fb01d964525f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:42:36.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7268" for this suite. 
• [SLOW TEST:12.641 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1128,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:42:36.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-9155 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9155 to expose endpoints map[] Jan 25 21:42:37.436: INFO: Get endpoints failed (94.593696ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jan 25 21:42:38.443: INFO: Get endpoints failed (1.101318599s elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jan 25 21:42:39.449: INFO: Get endpoints failed (2.107961173s elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jan 25 21:42:40.580: INFO: successfully validated that service multi-endpoint-test in namespace services-9155 exposes endpoints map[] (3.238177964s elapsed) STEP: Creating pod pod1 in namespace services-9155 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9155 to expose endpoints map[pod1:[100]] Jan 25 21:42:45.013: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.394395259s elapsed, will retry) Jan 25 21:42:50.528: INFO: successfully validated that service multi-endpoint-test in namespace services-9155 exposes endpoints map[pod1:[100]] (9.909755837s elapsed) STEP: Creating pod pod2 in namespace services-9155 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9155 to expose endpoints map[pod1:[100] pod2:[101]] Jan 25 21:42:54.994: INFO: Unexpected endpoints: found map[283c945a-f235-4c9b-a44f-73d521fd09c4:[100]], expected map[pod1:[100] pod2:[101]] (4.397015122s elapsed, will retry) Jan 25 21:42:58.860: INFO: successfully validated that service multi-endpoint-test in namespace services-9155 exposes endpoints map[pod1:[100] pod2:[101]] (8.262803779s elapsed) STEP: Deleting pod pod1 in namespace services-9155 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9155 to expose endpoints map[pod2:[101]] Jan 25 21:42:58.929: INFO: successfully validated that service multi-endpoint-test in namespace services-9155 exposes endpoints 
map[pod2:[101]] (46.560108ms elapsed) STEP: Deleting pod pod2 in namespace services-9155 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9155 to expose endpoints map[] Jan 25 21:42:58.988: INFO: successfully validated that service multi-endpoint-test in namespace services-9155 exposes endpoints map[] (40.706772ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:42:59.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9155" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:22.219 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":68,"skipped":1130,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:42:59.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 25 21:43:01.912: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 25 21:43:04.633: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585381, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585381, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585382, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585381, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:43:06.641: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585381, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585381, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585382, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585381, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:43:08.643: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585381, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585381, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585382, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585381, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:43:10.642: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585381, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585381, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585382, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585381, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 25 21:43:13.670: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Jan 25 21:43:21.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-3605 to-be-attached-pod -i -c=container1' Jan 25 21:43:24.019: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:43:24.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3605" for this suite. STEP: Destroying namespace "webhook-3605-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:25.100 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":69,"skipped":1134,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:43:24.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-db994a37-0b01-4835-9b07-be2c780b4740 STEP: Creating a pod to test consume secrets Jan 25 21:43:24.357: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f6b39b9e-c144-47f2-9b4b-cc49c9025cde" in namespace "projected-3044" to be "success or failure" Jan 25 21:43:24.364: INFO: Pod "pod-projected-secrets-f6b39b9e-c144-47f2-9b4b-cc49c9025cde": Phase="Pending", Reason="", readiness=false. Elapsed: 6.439104ms Jan 25 21:43:26.376: INFO: Pod "pod-projected-secrets-f6b39b9e-c144-47f2-9b4b-cc49c9025cde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018387684s Jan 25 21:43:28.398: INFO: Pod "pod-projected-secrets-f6b39b9e-c144-47f2-9b4b-cc49c9025cde": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040565189s Jan 25 21:43:30.403: INFO: Pod "pod-projected-secrets-f6b39b9e-c144-47f2-9b4b-cc49c9025cde": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046103786s Jan 25 21:43:32.411: INFO: Pod "pod-projected-secrets-f6b39b9e-c144-47f2-9b4b-cc49c9025cde": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05324561s Jan 25 21:43:34.418: INFO: Pod "pod-projected-secrets-f6b39b9e-c144-47f2-9b4b-cc49c9025cde": Phase="Pending", Reason="", readiness=false. Elapsed: 10.060448691s Jan 25 21:43:36.425: INFO: Pod "pod-projected-secrets-f6b39b9e-c144-47f2-9b4b-cc49c9025cde": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.06743462s STEP: Saw pod success Jan 25 21:43:36.425: INFO: Pod "pod-projected-secrets-f6b39b9e-c144-47f2-9b4b-cc49c9025cde" satisfied condition "success or failure" Jan 25 21:43:36.428: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-f6b39b9e-c144-47f2-9b4b-cc49c9025cde container projected-secret-volume-test: STEP: delete the pod Jan 25 21:43:36.468: INFO: Waiting for pod pod-projected-secrets-f6b39b9e-c144-47f2-9b4b-cc49c9025cde to disappear Jan 25 21:43:36.586: INFO: Pod pod-projected-secrets-f6b39b9e-c144-47f2-9b4b-cc49c9025cde no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:43:36.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3044" for this suite. • [SLOW TEST:12.347 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1141,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:43:36.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362 STEP: creating the pod Jan 25 21:43:36.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-747' Jan 25 21:43:37.129: INFO: stderr: "" Jan 25 21:43:37.129: INFO: stdout: "pod/pause created\n" Jan 25 21:43:37.129: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jan 25 21:43:37.129: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-747" to be "running and ready" Jan 25 21:43:37.143: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 13.319852ms Jan 25 21:43:39.152: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022467194s Jan 25 21:43:41.161: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032002078s Jan 25 21:43:43.168: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039036358s Jan 25 21:43:45.176: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.046506424s Jan 25 21:43:45.176: INFO: Pod "pause" satisfied condition "running and ready" Jan 25 21:43:45.176: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Jan 25 21:43:45.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-747' Jan 25 21:43:45.354: INFO: stderr: "" Jan 25 21:43:45.354: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jan 25 21:43:45.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-747' Jan 25 21:43:45.483: INFO: stderr: "" Jan 25 21:43:45.483: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s testing-label-value\n" STEP: removing the label testing-label of a pod Jan 25 21:43:45.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-747' Jan 25 21:43:45.591: INFO: stderr: "" Jan 25 21:43:45.591: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jan 25 21:43:45.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-747' Jan 25 21:43:45.673: INFO: stderr: "" Jan 25 21:43:45.673: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1369 STEP: using delete to clean up resources Jan 25 21:43:45.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-747' Jan 25 21:43:45.989: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 25 21:43:45.989: INFO: stdout: "pod \"pause\" force deleted\n" Jan 25 21:43:45.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-747' Jan 25 21:43:46.143: INFO: stderr: "No resources found in kubectl-747 namespace.\n" Jan 25 21:43:46.144: INFO: stdout: "" Jan 25 21:43:46.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-747 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 25 21:43:46.316: INFO: stderr: "" Jan 25 21:43:46.316: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:43:46.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-747" for this suite. 
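Three kubectl label idioms carry the whole test above: key=value sets a label, -L key surfaces it as an output column, and a trailing hyphen removes it. Generalized from the logged commands (the namespace is a placeholder):

    kubectl label pods pause testing-label=testing-label-value -n <namespace>   # add the label
    kubectl get pod pause -L testing-label -n <namespace>                       # show it as a TESTING-LABEL column
    kubectl label pods pause testing-label- -n <namespace>                      # trailing '-' removes the key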
• [SLOW TEST:9.735 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1359 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":71,"skipped":1186,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:43:46.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jan 25 21:43:57.067: INFO: Successfully updated pod "annotationupdate731e33c0-60ee-434e-afe0-a258ac40ed68" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:43:59.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9363" for this suite. 
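The update above propagates without a container restart because the kubelet periodically refreshes downwardAPI files in projected volumes when pod metadata changes. A minimal sketch of such a pod (annotationupdate-demo and the build annotation are illustrative, not from the run):

    apiVersion: v1
    kind: Pod
    metadata:
      name: annotationupdate-demo
      annotations:
        build: one
    spec:
      containers:
      - name: client-container
        image: busybox
        # Re-reads the projected file so annotation changes become visible
        command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: annotations
                fieldRef:
                  fieldPath: metadata.annotations

After `kubectl annotate pod annotationupdate-demo build=two --overwrite`, the new value should appear in /etc/podinfo/annotations within the kubelet's sync period.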
• [SLOW TEST:12.794 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1194,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:43:59.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jan 25 21:43:59.232: INFO: Pod name pod-release: Found 0 pods out of 1 Jan 25 21:44:04.249: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:44:04.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9102" for this suite. 
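"Released" above means that once a pod's labels stop matching the ReplicationController's selector, the controller strips its ownerReference (orphaning the pod) and starts a replacement to restore the replica count. A sketch of triggering the same thing by hand (the pod name is a placeholder; the label key mirrors the test's pod-release naming):

    # Relabel the pod so it no longer matches the RC selector
    kubectl patch pod <pod-release-xxxxx> --type=merge \
      -p '{"metadata":{"labels":{"name":"released"}}}'
    # Empty output here confirms the ownerReference was removed
    kubectl get pod <pod-release-xxxxx> -o jsonpath='{.metadata.ownerReferences}'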
• [SLOW TEST:5.412 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":73,"skipped":1213,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:44:04.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jan 25 21:44:13.730: INFO: 10 pods remaining Jan 25 21:44:13.730: INFO: 9 pods has nil DeletionTimestamp Jan 25 21:44:13.730: INFO: Jan 25 21:44:14.639: INFO: 0 pods remaining Jan 25 21:44:14.639: INFO: 0 pods has nil DeletionTimestamp Jan 25 21:44:14.639: INFO: STEP: Gathering metrics W0125 21:44:15.634205 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 25 21:44:15.634: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:44:15.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9319" for this suite. 
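The deleteOptions in question select Foreground propagation: the RC is marked with a deletionTimestamp and a foregroundDeletion finalizer, and only disappears after the garbage collector has removed every dependent pod — which is why pods were still draining above while the RC stayed around. kubectl of this vintage doesn't expose the propagation policy by name, so a sketch against the raw API (namespace and RC name are placeholders; assumes kubectl proxy on 127.0.0.1:8001):

    kubectl proxy --port=8001 &
    curl -X DELETE \
      -H 'Content-Type: application/json' \
      -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
      http://127.0.0.1:8001/api/v1/namespaces/<ns>/replicationcontrollers/<rc-name>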
• [SLOW TEST:11.364 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":74,"skipped":1222,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:44:15.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0125 21:44:28.221270 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 25 21:44:28.221: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:44:28.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6602" for this suite. 
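The "not orphaning" case above is Background propagation: the RC object is removed at once and the garbage collector then deletes the dependent pods; propagationPolicy Orphan would instead strip their ownerReferences and leave them running. The same raw-API sketch as before with only the policy changed, plus a watch on the dependents (placeholders as before):

    curl -X DELETE \
      -H 'Content-Type: application/json' \
      -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}' \
      http://127.0.0.1:8001/api/v1/namespaces/<ns>/replicationcontrollers/<rc-name>
    # Pods go Terminating and disappear; with Orphan they would remain
    kubectl get pods -n <ns> -l name=<rc-label> --watch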
• [SLOW TEST:12.576 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":75,"skipped":1222,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:44:28.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jan 25 21:44:29.276: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 25 21:44:29.487: INFO: Waiting for terminating namespaces to be deleted... Jan 25 21:44:29.505: INFO: Logging pods the kubelet thinks is on node jerma-node before test Jan 25 21:44:29.517: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Jan 25 21:44:29.517: INFO: Container kube-proxy ready: true, restart count 0 Jan 25 21:44:29.517: INFO: annotationupdate731e33c0-60ee-434e-afe0-a258ac40ed68 from projected-9363 started at 2020-01-25 21:43:47 +0000 UTC (1 container statuses recorded) Jan 25 21:44:29.517: INFO: Container client-container ready: true, restart count 0 Jan 25 21:44:29.517: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 25 21:44:29.517: INFO: Container weave ready: true, restart count 1 Jan 25 21:44:29.517: INFO: Container weave-npc ready: true, restart count 0 Jan 25 21:44:29.517: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Jan 25 21:44:29.545: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 25 21:44:29.545: INFO: Container kube-apiserver ready: true, restart count 1 Jan 25 21:44:29.545: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 25 21:44:29.545: INFO: Container etcd ready: true, restart count 1 Jan 25 21:44:29.545: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 25 21:44:29.545: INFO: Container coredns ready: true, restart count 0 Jan 25 21:44:29.545: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 25 21:44:29.545: INFO: Container coredns ready: true, restart count 0 Jan 25 21:44:29.545: INFO: 
kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 25 21:44:29.545: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 25 21:44:29.545: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Jan 25 21:44:29.545: INFO: Container kube-proxy ready: true, restart count 0 Jan 25 21:44:29.545: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 25 21:44:29.545: INFO: Container weave ready: true, restart count 0 Jan 25 21:44:29.545: INFO: Container weave-npc ready: true, restart count 0 Jan 25 21:44:29.545: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 25 21:44:29.545: INFO: Container kube-scheduler ready: true, restart count 3 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-893cba10-4175-4438-80be-385e2bb11f80 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-893cba10-4175-4438-80be-385e2bb11f80 off the node jerma-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-893cba10-4175-4438-80be-385e2bb11f80 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:49:51.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8115" for this suite. 
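The conflict logic above: a hostPort request with an empty hostIP is treated as 0.0.0.0, i.e. every address on the node, so a second pod asking for the same port and protocol on 127.0.0.1 overlaps it and can never schedule there. A minimal sketch of the conflicting pair (pod names are illustrative; jerma-node is the node from this run, pinned here via the well-known hostname label rather than the test's random label):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod4-demo
    spec:
      nodeSelector:
        kubernetes.io/hostname: jerma-node
      containers:
      - name: agnhost
        image: k8s.gcr.io/pause:3.1
        ports:
        - containerPort: 8080
          hostPort: 54322
          protocol: TCP
          # hostIP omitted => 0.0.0.0, every node address
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod5-demo
    spec:
      nodeSelector:
        kubernetes.io/hostname: jerma-node
      containers:
      - name: agnhost
        image: k8s.gcr.io/pause:3.1
        ports:
        - containerPort: 8080
          hostPort: 54322
          protocol: TCP
          hostIP: 127.0.0.1   # overlaps 0.0.0.0 above => stays Pending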
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:323.371 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":76,"skipped":1232,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:49:51.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-5936fec5-1547-436b-af6d-93d4b74fcbd2 STEP: Creating a pod to test consume secrets Jan 25 21:49:52.063: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8c87b942-6f28-45e9-ae92-5e7034351e3a" in namespace "projected-9284" to be "success or failure" Jan 25 21:49:52.093: INFO: Pod "pod-projected-secrets-8c87b942-6f28-45e9-ae92-5e7034351e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.352609ms Jan 25 21:49:54.105: INFO: Pod "pod-projected-secrets-8c87b942-6f28-45e9-ae92-5e7034351e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041871127s Jan 25 21:49:56.120: INFO: Pod "pod-projected-secrets-8c87b942-6f28-45e9-ae92-5e7034351e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056979145s Jan 25 21:49:58.254: INFO: Pod "pod-projected-secrets-8c87b942-6f28-45e9-ae92-5e7034351e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.190981485s Jan 25 21:50:00.262: INFO: Pod "pod-projected-secrets-8c87b942-6f28-45e9-ae92-5e7034351e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.199446486s Jan 25 21:50:02.271: INFO: Pod "pod-projected-secrets-8c87b942-6f28-45e9-ae92-5e7034351e3a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.207964662s STEP: Saw pod success Jan 25 21:50:02.271: INFO: Pod "pod-projected-secrets-8c87b942-6f28-45e9-ae92-5e7034351e3a" satisfied condition "success or failure" Jan 25 21:50:02.275: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-8c87b942-6f28-45e9-ae92-5e7034351e3a container secret-volume-test: STEP: delete the pod Jan 25 21:50:02.387: INFO: Waiting for pod pod-projected-secrets-8c87b942-6f28-45e9-ae92-5e7034351e3a to disappear Jan 25 21:50:02.418: INFO: Pod pod-projected-secrets-8c87b942-6f28-45e9-ae92-5e7034351e3a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:50:02.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9284" for this suite. • [SLOW TEST:10.563 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1234,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} S ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:50:02.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 25 21:50:13.106: INFO: Successfully updated pod "pod-update-769a93d2-4784-44f5-869a-e9e694cedec6" STEP: verifying the updated pod is in kubernetes Jan 25 21:50:13.137: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:50:13.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-38" for this suite. 
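A running pod accepts only narrow updates: metadata (labels, annotations) freely, plus a few spec fields such as a container's image or spec.activeDeadlineSeconds; everything else is immutable once the pod exists. A sketch of both kinds of in-place update (pod and container names are placeholders, not from the run):

    # Metadata update: set or change a label
    kubectl patch pod <pod-name> --type=merge -p '{"metadata":{"labels":{"time":"updated"}}}'
    # Spec update on one of the few mutable fields: the container image
    kubectl patch pod <pod-name> -p '{"spec":{"containers":[{"name":"<container>","image":"nginx:1.17"}]}}'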
• [SLOW TEST:10.717 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1235,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:50:13.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 25 21:50:13.262: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-9d0f772d-9dbc-4d22-b9e2-bb107eb6d76f" in namespace "security-context-test-3275" to be "success or failure" Jan 25 21:50:13.270: INFO: Pod "alpine-nnp-false-9d0f772d-9dbc-4d22-b9e2-bb107eb6d76f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.952364ms Jan 25 21:50:15.277: INFO: Pod "alpine-nnp-false-9d0f772d-9dbc-4d22-b9e2-bb107eb6d76f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015258107s Jan 25 21:50:17.284: INFO: Pod "alpine-nnp-false-9d0f772d-9dbc-4d22-b9e2-bb107eb6d76f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021599638s Jan 25 21:50:19.331: INFO: Pod "alpine-nnp-false-9d0f772d-9dbc-4d22-b9e2-bb107eb6d76f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068706061s Jan 25 21:50:21.338: INFO: Pod "alpine-nnp-false-9d0f772d-9dbc-4d22-b9e2-bb107eb6d76f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076457563s Jan 25 21:50:23.345: INFO: Pod "alpine-nnp-false-9d0f772d-9dbc-4d22-b9e2-bb107eb6d76f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.083496634s Jan 25 21:50:23.346: INFO: Pod "alpine-nnp-false-9d0f772d-9dbc-4d22-b9e2-bb107eb6d76f" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:50:23.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3275" for this suite. 
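allowPrivilegeEscalation: false sets the Linux no_new_privs flag on the container's processes, so mechanisms like setuid binaries cannot grant more privileges than the process started with. A minimal sketch that makes the flag visible (the pod name is illustrative; assumes a kernel recent enough to report NoNewPrivs in /proc):

    apiVersion: v1
    kind: Pod
    metadata:
      name: nnp-false-demo
    spec:
      restartPolicy: Never
      containers:
      - name: alpine-nnp-false
        image: alpine
        # Should log: NoNewPrivs: 1
        command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
        securityContext:
          allowPrivilegeEscalation: false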
• [SLOW TEST:10.301 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1245,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:50:23.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5249.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5249.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5249.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5249.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5249.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5249.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 25 21:50:35.788: INFO: DNS probes using dns-5249/dns-test-33254de3-a1ec-4cac-97e0-51d5bc5328fa succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:50:36.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5249" for this suite. • [SLOW TEST:12.592 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":80,"skipped":1285,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:50:36.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-79e08793-3089-41ab-b644-93ec980c52ea STEP: Creating a pod to test consume configMaps Jan 25 21:50:36.282: INFO: Waiting up to 5m0s for pod "pod-configmaps-f27f726f-4427-485d-9215-4bb76fb93d1d" in namespace "configmap-2139" to be "success or failure" Jan 25 21:50:36.305: INFO: Pod "pod-configmaps-f27f726f-4427-485d-9215-4bb76fb93d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 22.572924ms Jan 25 21:50:38.312: INFO: Pod "pod-configmaps-f27f726f-4427-485d-9215-4bb76fb93d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029787838s Jan 25 21:50:40.319: INFO: Pod "pod-configmaps-f27f726f-4427-485d-9215-4bb76fb93d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036937453s Jan 25 21:50:42.327: INFO: Pod "pod-configmaps-f27f726f-4427-485d-9215-4bb76fb93d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044932029s Jan 25 21:50:44.333: INFO: Pod "pod-configmaps-f27f726f-4427-485d-9215-4bb76fb93d1d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.050976058s Jan 25 21:50:46.343: INFO: Pod "pod-configmaps-f27f726f-4427-485d-9215-4bb76fb93d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.060308955s Jan 25 21:50:48.353: INFO: Pod "pod-configmaps-f27f726f-4427-485d-9215-4bb76fb93d1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.070513877s STEP: Saw pod success Jan 25 21:50:48.353: INFO: Pod "pod-configmaps-f27f726f-4427-485d-9215-4bb76fb93d1d" satisfied condition "success or failure" Jan 25 21:50:48.359: INFO: Trying to get logs from node jerma-node pod pod-configmaps-f27f726f-4427-485d-9215-4bb76fb93d1d container configmap-volume-test: STEP: delete the pod Jan 25 21:50:48.399: INFO: Waiting for pod pod-configmaps-f27f726f-4427-485d-9215-4bb76fb93d1d to disappear Jan 25 21:50:48.431: INFO: Pod pod-configmaps-f27f726f-4427-485d-9215-4bb76fb93d1d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:50:48.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2139" for this suite. • [SLOW TEST:12.403 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1307,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:50:48.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-dd8ace43-e9e7-4393-8b51-6c1f51c3216c STEP: Creating a pod to test consume configMaps Jan 25 21:50:48.631: INFO: Waiting up to 5m0s for pod "pod-configmaps-66fe1055-2d4d-47d3-b54a-0a9b0e912a54" in namespace "configmap-7493" to be "success or failure" Jan 25 21:50:48.673: INFO: Pod "pod-configmaps-66fe1055-2d4d-47d3-b54a-0a9b0e912a54": Phase="Pending", Reason="", readiness=false. Elapsed: 41.917632ms Jan 25 21:50:50.685: INFO: Pod "pod-configmaps-66fe1055-2d4d-47d3-b54a-0a9b0e912a54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05460279s Jan 25 21:50:52.694: INFO: Pod "pod-configmaps-66fe1055-2d4d-47d3-b54a-0a9b0e912a54": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.063080434s Jan 25 21:50:54.700: INFO: Pod "pod-configmaps-66fe1055-2d4d-47d3-b54a-0a9b0e912a54": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069158764s Jan 25 21:50:56.711: INFO: Pod "pod-configmaps-66fe1055-2d4d-47d3-b54a-0a9b0e912a54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079978878s STEP: Saw pod success Jan 25 21:50:56.711: INFO: Pod "pod-configmaps-66fe1055-2d4d-47d3-b54a-0a9b0e912a54" satisfied condition "success or failure" Jan 25 21:50:56.716: INFO: Trying to get logs from node jerma-node pod pod-configmaps-66fe1055-2d4d-47d3-b54a-0a9b0e912a54 container configmap-volume-test: STEP: delete the pod Jan 25 21:50:56.774: INFO: Waiting for pod pod-configmaps-66fe1055-2d4d-47d3-b54a-0a9b0e912a54 to disappear Jan 25 21:50:56.782: INFO: Pod pod-configmaps-66fe1055-2d4d-47d3-b54a-0a9b0e912a54 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:50:56.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7493" for this suite. • [SLOW TEST:8.344 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1330,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:50:56.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-d5p8 STEP: Creating a pod to test atomic-volume-subpath Jan 25 21:50:57.085: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-d5p8" in namespace "subpath-3777" to be "success or failure" Jan 25 21:50:57.117: INFO: Pod "pod-subpath-test-projected-d5p8": Phase="Pending", Reason="", readiness=false. Elapsed: 32.762633ms Jan 25 21:50:59.130: INFO: Pod "pod-subpath-test-projected-d5p8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045618207s Jan 25 21:51:01.137: INFO: Pod "pod-subpath-test-projected-d5p8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.051990359s Jan 25 21:51:03.143: INFO: Pod "pod-subpath-test-projected-d5p8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057908945s Jan 25 21:51:05.148: INFO: Pod "pod-subpath-test-projected-d5p8": Phase="Running", Reason="", readiness=true. Elapsed: 8.063595295s Jan 25 21:51:07.155: INFO: Pod "pod-subpath-test-projected-d5p8": Phase="Running", Reason="", readiness=true. Elapsed: 10.070187057s Jan 25 21:51:09.166: INFO: Pod "pod-subpath-test-projected-d5p8": Phase="Running", Reason="", readiness=true. Elapsed: 12.081779883s Jan 25 21:51:11.172: INFO: Pod "pod-subpath-test-projected-d5p8": Phase="Running", Reason="", readiness=true. Elapsed: 14.087861843s Jan 25 21:51:13.179: INFO: Pod "pod-subpath-test-projected-d5p8": Phase="Running", Reason="", readiness=true. Elapsed: 16.094159255s Jan 25 21:51:15.186: INFO: Pod "pod-subpath-test-projected-d5p8": Phase="Running", Reason="", readiness=true. Elapsed: 18.100892737s Jan 25 21:51:17.192: INFO: Pod "pod-subpath-test-projected-d5p8": Phase="Running", Reason="", readiness=true. Elapsed: 20.107228521s Jan 25 21:51:19.197: INFO: Pod "pod-subpath-test-projected-d5p8": Phase="Running", Reason="", readiness=true. Elapsed: 22.112627838s Jan 25 21:51:21.206: INFO: Pod "pod-subpath-test-projected-d5p8": Phase="Running", Reason="", readiness=true. Elapsed: 24.120941604s Jan 25 21:51:23.213: INFO: Pod "pod-subpath-test-projected-d5p8": Phase="Running", Reason="", readiness=true. Elapsed: 26.128460393s Jan 25 21:51:25.234: INFO: Pod "pod-subpath-test-projected-d5p8": Phase="Running", Reason="", readiness=true. Elapsed: 28.149151728s Jan 25 21:51:27.277: INFO: Pod "pod-subpath-test-projected-d5p8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.192093531s STEP: Saw pod success Jan 25 21:51:27.277: INFO: Pod "pod-subpath-test-projected-d5p8" satisfied condition "success or failure" Jan 25 21:51:27.281: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-d5p8 container test-container-subpath-projected-d5p8: STEP: delete the pod Jan 25 21:51:27.350: INFO: Waiting for pod pod-subpath-test-projected-d5p8 to disappear Jan 25 21:51:27.364: INFO: Pod pod-subpath-test-projected-d5p8 no longer exists STEP: Deleting pod pod-subpath-test-projected-d5p8 Jan 25 21:51:27.364: INFO: Deleting pod "pod-subpath-test-projected-d5p8" in namespace "subpath-3777" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:51:27.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3777" for this suite. • [SLOW TEST:30.644 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":83,"skipped":1388,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} S ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:51:27.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:51:27.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8081" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":84,"skipped":1389,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:51:27.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jan 25 21:51:28.121: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4182 /api/v1/namespaces/watch-4182/configmaps/e2e-watch-test-resource-version bfd252ed-80ec-4632-9b65-d3b963f1fe32 4332234 0 2020-01-25 21:51:27 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 25 21:51:28.122: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4182 /api/v1/namespaces/watch-4182/configmaps/e2e-watch-test-resource-version bfd252ed-80ec-4632-9b65-d3b963f1fe32 4332235 0 2020-01-25 21:51:27 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:51:28.122: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "watch-4182" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":85,"skipped":1399,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:51:28.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 25 21:51:50.433: INFO: Container started at 2020-01-25 21:51:34 +0000 UTC, pod became ready at 2020-01-25 21:51:49 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:51:50.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5862" for this suite. 
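The readiness check above hinges on initialDelaySeconds: the kubelet starts the container immediately, but it does not begin probing until the delay has elapsed, so the pod cannot become Ready before then, and a readiness probe by itself never restarts the container. A minimal hand-run sketch of the same behaviour, assuming a hypothetical pod name and a 10s delay (neither is taken from the suite):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo              # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: probe-test
    image: busybox:1.31
    command: ["sh", "-c", "touch /tmp/ready && sleep 600"]
    readinessProbe:
      exec:
        command: ["cat", "/tmp/ready"]
      initialDelaySeconds: 10       # Ready cannot be True before this
      periodSeconds: 5
EOF
# Ready flips to True only after ~10s; restartCount stays 0
kubectl get pod readiness-demo -w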
• [SLOW TEST:22.312 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1413,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:51:50.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-b7da7024-3f05-4a3c-ac01-5c402233a6b3 STEP: Creating a pod to test consume secrets Jan 25 21:51:50.595: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fe51c243-41dc-4f6f-9ad6-27123c7a1f5f" in namespace "projected-9324" to be "success or failure" Jan 25 21:51:50.603: INFO: Pod "pod-projected-secrets-fe51c243-41dc-4f6f-9ad6-27123c7a1f5f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.477963ms Jan 25 21:51:52.618: INFO: Pod "pod-projected-secrets-fe51c243-41dc-4f6f-9ad6-27123c7a1f5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023359874s Jan 25 21:51:54.634: INFO: Pod "pod-projected-secrets-fe51c243-41dc-4f6f-9ad6-27123c7a1f5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038803793s Jan 25 21:51:56.639: INFO: Pod "pod-projected-secrets-fe51c243-41dc-4f6f-9ad6-27123c7a1f5f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044341787s Jan 25 21:51:58.647: INFO: Pod "pod-projected-secrets-fe51c243-41dc-4f6f-9ad6-27123c7a1f5f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052098084s Jan 25 21:52:00.657: INFO: Pod "pod-projected-secrets-fe51c243-41dc-4f6f-9ad6-27123c7a1f5f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.061516944s STEP: Saw pod success Jan 25 21:52:00.657: INFO: Pod "pod-projected-secrets-fe51c243-41dc-4f6f-9ad6-27123c7a1f5f" satisfied condition "success or failure" Jan 25 21:52:00.662: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-fe51c243-41dc-4f6f-9ad6-27123c7a1f5f container projected-secret-volume-test: STEP: delete the pod Jan 25 21:52:00.726: INFO: Waiting for pod pod-projected-secrets-fe51c243-41dc-4f6f-9ad6-27123c7a1f5f to disappear Jan 25 21:52:00.731: INFO: Pod pod-projected-secrets-fe51c243-41dc-4f6f-9ad6-27123c7a1f5f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:52:00.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9324" for this suite. • [SLOW TEST:10.291 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1431,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:52:00.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-12ecdc77-76a9-4885-8e53-88a5f709bc53 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:52:13.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-441" for this suite. 
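The text/binary split this ConfigMap spec checks maps to the two payload fields on a ConfigMap: data for UTF-8 strings and binaryData for base64-encoded bytes; both surface as files when the ConfigMap is mounted. A rough equivalent of the pod under test, with hypothetical names and a 4-byte illustrative payload:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: binary-demo                 # hypothetical name
data:
  text-key: "hello"
binaryData:
  binary-key: "3q2+7w=="            # base64 for bytes de ad be ef
---
apiVersion: v1
kind: Pod
metadata:
  name: binary-demo-pod             # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox:1.31
    # print the text key, then echo the binary key back as base64
    command: ["sh", "-c", "cat /etc/cm/text-key; base64 /etc/cm/binary-key"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: binary-demo
EOF
kubectl logs binary-demo-pod        # expect: hello, then 3q2+7w==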
• [SLOW TEST:12.497 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1439,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:52:13.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 25 21:52:13.402: INFO: Waiting up to 5m0s for pod "pod-2903d32b-e544-4b6a-a0a2-e505ab9c53cc" in namespace "emptydir-3190" to be "success or failure" Jan 25 21:52:13.418: INFO: Pod "pod-2903d32b-e544-4b6a-a0a2-e505ab9c53cc": Phase="Pending", Reason="", readiness=false. Elapsed: 15.089113ms Jan 25 21:52:15.425: INFO: Pod "pod-2903d32b-e544-4b6a-a0a2-e505ab9c53cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022700453s Jan 25 21:52:17.434: INFO: Pod "pod-2903d32b-e544-4b6a-a0a2-e505ab9c53cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031835387s Jan 25 21:52:19.441: INFO: Pod "pod-2903d32b-e544-4b6a-a0a2-e505ab9c53cc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037962595s Jan 25 21:52:21.448: INFO: Pod "pod-2903d32b-e544-4b6a-a0a2-e505ab9c53cc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045227681s Jan 25 21:52:23.455: INFO: Pod "pod-2903d32b-e544-4b6a-a0a2-e505ab9c53cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.05245039s STEP: Saw pod success Jan 25 21:52:23.455: INFO: Pod "pod-2903d32b-e544-4b6a-a0a2-e505ab9c53cc" satisfied condition "success or failure" Jan 25 21:52:23.462: INFO: Trying to get logs from node jerma-node pod pod-2903d32b-e544-4b6a-a0a2-e505ab9c53cc container test-container: STEP: delete the pod Jan 25 21:52:23.495: INFO: Waiting for pod pod-2903d32b-e544-4b6a-a0a2-e505ab9c53cc to disappear Jan 25 21:52:23.505: INFO: Pod pod-2903d32b-e544-4b6a-a0a2-e505ab9c53cc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:52:23.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3190" for this suite. 
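The (root,0644,default) triple in this EmptyDir spec reads as: run as root, expect file mode 0644, use the default emptyDir medium (node disk rather than memory-backed tmpfs). A minimal sketch of the same check, assuming a hypothetical pod name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo               # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox:1.31
    # write a file, force mode 0644, then report the mode and the uid
    command: ["sh", "-c", "echo mount-tester > /ed/f && chmod 0644 /ed/f && stat -c '%a' /ed/f && id -u"]
    volumeMounts:
    - name: ed
      mountPath: /ed
  volumes:
  - name: ed
    emptyDir: {}                    # default medium
EOF
kubectl logs emptydir-demo          # expect: 644, then 0 (root)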
• [SLOW TEST:10.276 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1447,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:52:23.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-1304da64-ef5d-4c94-8a77-9f356ea9ca2f STEP: Creating a pod to test consume configMaps Jan 25 21:52:23.664: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1690d164-c544-4f9d-aca5-d3c065579ff4" in namespace "projected-9954" to be "success or failure" Jan 25 21:52:23.713: INFO: Pod "pod-projected-configmaps-1690d164-c544-4f9d-aca5-d3c065579ff4": Phase="Pending", Reason="", readiness=false. Elapsed: 48.217662ms Jan 25 21:52:25.722: INFO: Pod "pod-projected-configmaps-1690d164-c544-4f9d-aca5-d3c065579ff4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05773349s Jan 25 21:52:27.731: INFO: Pod "pod-projected-configmaps-1690d164-c544-4f9d-aca5-d3c065579ff4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066299957s Jan 25 21:52:29.738: INFO: Pod "pod-projected-configmaps-1690d164-c544-4f9d-aca5-d3c065579ff4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073303051s Jan 25 21:52:31.751: INFO: Pod "pod-projected-configmaps-1690d164-c544-4f9d-aca5-d3c065579ff4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086433948s Jan 25 21:52:33.760: INFO: Pod "pod-projected-configmaps-1690d164-c544-4f9d-aca5-d3c065579ff4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.095944544s STEP: Saw pod success Jan 25 21:52:33.761: INFO: Pod "pod-projected-configmaps-1690d164-c544-4f9d-aca5-d3c065579ff4" satisfied condition "success or failure" Jan 25 21:52:33.767: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-1690d164-c544-4f9d-aca5-d3c065579ff4 container projected-configmap-volume-test: STEP: delete the pod Jan 25 21:52:33.829: INFO: Waiting for pod pod-projected-configmaps-1690d164-c544-4f9d-aca5-d3c065579ff4 to disappear Jan 25 21:52:33.835: INFO: Pod pod-projected-configmaps-1690d164-c544-4f9d-aca5-d3c065579ff4 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:52:33.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9954" for this suite. • [SLOW TEST:10.332 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1473,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:52:33.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 25 21:52:34.295: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jan 25 21:52:39.314: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 25 21:52:43.352: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jan 25 21:52:45.360: INFO: Creating deployment "test-rollover-deployment" Jan 25 21:52:45.412: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jan 25 21:52:47.432: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jan 25 21:52:47.448: INFO: Ensure that both replica sets have 1 created replica Jan 25 21:52:47.461: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jan 25 21:52:47.471: INFO: Updating deployment test-rollover-deployment Jan 25 21:52:47.471: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jan 25 21:52:49.516: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jan 
25 21:52:49.526: INFO: Make sure deployment "test-rollover-deployment" is complete Jan 25 21:52:49.535: INFO: all replica sets need to contain the pod-template-hash label Jan 25 21:52:49.535: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585968, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:52:51.548: INFO: all replica sets need to contain the pod-template-hash label Jan 25 21:52:51.548: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585968, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:52:53.554: INFO: all replica sets need to contain the pod-template-hash label Jan 25 21:52:53.555: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585968, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:52:55.544: INFO: all replica sets need to contain the pod-template-hash label Jan 25 21:52:55.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585968, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:52:57.551: INFO: all replica sets need to contain the pod-template-hash label Jan 25 21:52:57.551: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585976, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:52:59.552: INFO: all replica sets need to contain the pod-template-hash label Jan 25 21:52:59.552: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585976, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:53:01.549: INFO: all replica sets need to contain the pod-template-hash label Jan 25 21:53:01.549: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585976, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:53:03.550: INFO: all replica sets need to contain the pod-template-hash label Jan 25 21:53:03.550: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585976, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:53:05.562: INFO: all replica sets need to contain the pod-template-hash label Jan 25 21:53:05.563: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585976, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715585965, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:53:07.549: INFO: Jan 25 21:53:07.549: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jan 25 21:53:07.561: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9958 /apis/apps/v1/namespaces/deployment-9958/deployments/test-rollover-deployment 9410e9ec-8afa-4274-9fd8-40e2334c05f2 4332689 2 2020-01-25 21:52:45 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004886de8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-25 21:52:45 +0000 UTC,LastTransitionTime:2020-01-25 21:52:45 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-01-25 21:53:06 +0000 UTC,LastTransitionTime:2020-01-25 21:52:45 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 25 21:53:07.566: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-9958 /apis/apps/v1/namespaces/deployment-9958/replicasets/test-rollover-deployment-574d6dfbff 3f2c8b3d-c6b4-4fd4-a25a-676f9ebe2392 4332678 2 2020-01-25 21:52:47 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 9410e9ec-8afa-4274-9fd8-40e2334c05f2 0xc004887467 0xc004887468}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0048874d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 25 21:53:07.566: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 25 21:53:07.566: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9958 /apis/apps/v1/namespaces/deployment-9958/replicasets/test-rollover-controller 099c2deb-8a44-4132-b86b-42108b2966b6 4332688 2 2020-01-25 21:52:33 +0000 UTC map[name:rollover-pod pod:httpd] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 9410e9ec-8afa-4274-9fd8-40e2334c05f2 0xc004887357 0xc004887358}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0048873d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 25 21:53:07.566: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-9958 /apis/apps/v1/namespaces/deployment-9958/replicasets/test-rollover-deployment-f6c94f66c 8ceff614-d712-43d2-804c-a5e74a9266d2 4332626 2 2020-01-25 21:52:45 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 9410e9ec-8afa-4274-9fd8-40e2334c05f2 0xc0048875f0 0xc0048875f1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004887738 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 25 21:53:07.571: INFO: Pod "test-rollover-deployment-574d6dfbff-fwftd" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-fwftd test-rollover-deployment-574d6dfbff- deployment-9958 /api/v1/namespaces/deployment-9958/pods/test-rollover-deployment-574d6dfbff-fwftd 2ac19c71-8c97-457c-9643-f44143bd1862 4332652 0 2020-01-25 21:52:47 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 3f2c8b3d-c6b4-4fd4-a25a-676f9ebe2392 0xc004887ee7 0xc004887ee8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bpvm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bpvm5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bpvm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:52:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:52:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:52:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:52:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-25 21:52:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 21:52:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://af95597b8c25e31dee792c7ee995834ff0922190e8e4b23d6a2a66edc5f63a27,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:53:07.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9958" for this suite. • [SLOW TEST:33.728 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":91,"skipped":1486,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:53:07.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7351 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7351 I0125 21:53:08.289791 8 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7351, replica count: 2 I0125 21:53:11.341391 8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 21:53:14.341877 8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 21:53:17.342748 8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 21:53:20.343515 8 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 
running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 25 21:53:20.343: INFO: Creating new exec pod Jan 25 21:53:29.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7351 execpodq822b -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jan 25 21:53:32.044: INFO: stderr: "I0125 21:53:31.724776 997 log.go:172] (0xc000110580) (0xc0004b55e0) Create stream\nI0125 21:53:31.725090 997 log.go:172] (0xc000110580) (0xc0004b55e0) Stream added, broadcasting: 1\nI0125 21:53:31.733546 997 log.go:172] (0xc000110580) Reply frame received for 1\nI0125 21:53:31.733680 997 log.go:172] (0xc000110580) (0xc0008f40a0) Create stream\nI0125 21:53:31.733707 997 log.go:172] (0xc000110580) (0xc0008f40a0) Stream added, broadcasting: 3\nI0125 21:53:31.737298 997 log.go:172] (0xc000110580) Reply frame received for 3\nI0125 21:53:31.737351 997 log.go:172] (0xc000110580) (0xc000cca0a0) Create stream\nI0125 21:53:31.737363 997 log.go:172] (0xc000110580) (0xc000cca0a0) Stream added, broadcasting: 5\nI0125 21:53:31.739338 997 log.go:172] (0xc000110580) Reply frame received for 5\nI0125 21:53:31.898247 997 log.go:172] (0xc000110580) Data frame received for 5\nI0125 21:53:31.898379 997 log.go:172] (0xc000cca0a0) (5) Data frame handling\nI0125 21:53:31.898445 997 log.go:172] (0xc000cca0a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0125 21:53:31.925249 997 log.go:172] (0xc000110580) Data frame received for 5\nI0125 21:53:31.925341 997 log.go:172] (0xc000cca0a0) (5) Data frame handling\nI0125 21:53:31.925378 997 log.go:172] (0xc000cca0a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0125 21:53:32.028708 997 log.go:172] (0xc000110580) Data frame received for 1\nI0125 21:53:32.029329 997 log.go:172] (0xc000110580) (0xc000cca0a0) Stream removed, broadcasting: 5\nI0125 21:53:32.029787 997 log.go:172] (0xc0004b55e0) (1) Data frame handling\nI0125 21:53:32.029842 997 log.go:172] (0xc0004b55e0) (1) Data frame sent\nI0125 21:53:32.029874 997 log.go:172] (0xc000110580) (0xc0004b55e0) Stream removed, broadcasting: 1\nI0125 21:53:32.030032 997 log.go:172] (0xc000110580) (0xc0008f40a0) Stream removed, broadcasting: 3\nI0125 21:53:32.030133 997 log.go:172] (0xc000110580) Go away received\nI0125 21:53:32.032159 997 log.go:172] (0xc000110580) (0xc0004b55e0) Stream removed, broadcasting: 1\nI0125 21:53:32.032216 997 log.go:172] (0xc000110580) (0xc0008f40a0) Stream removed, broadcasting: 3\nI0125 21:53:32.032246 997 log.go:172] (0xc000110580) (0xc000cca0a0) Stream removed, broadcasting: 5\n" Jan 25 21:53:32.044: INFO: stdout: "" Jan 25 21:53:32.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7351 execpodq822b -- /bin/sh -x -c nc -zv -t -w 2 10.96.125.96 80' Jan 25 21:53:32.366: INFO: stderr: "I0125 21:53:32.207292 1030 log.go:172] (0xc000c431e0) (0xc000b76460) Create stream\nI0125 21:53:32.207952 1030 log.go:172] (0xc000c431e0) (0xc000b76460) Stream added, broadcasting: 1\nI0125 21:53:32.213496 1030 log.go:172] (0xc000c431e0) Reply frame received for 1\nI0125 21:53:32.213635 1030 log.go:172] (0xc000c431e0) (0xc000b76500) Create stream\nI0125 21:53:32.213650 1030 log.go:172] (0xc000c431e0) (0xc000b76500) Stream added, broadcasting: 3\nI0125 21:53:32.214886 1030 log.go:172] (0xc000c431e0) Reply frame received for 3\nI0125 21:53:32.214917 1030 log.go:172] (0xc000c431e0) (0xc000c3a280) Create stream\nI0125 21:53:32.214928 1030 log.go:172] 
(0xc000c431e0) (0xc000c3a280) Stream added, broadcasting: 5\nI0125 21:53:32.216075 1030 log.go:172] (0xc000c431e0) Reply frame received for 5\nI0125 21:53:32.280588 1030 log.go:172] (0xc000c431e0) Data frame received for 5\nI0125 21:53:32.280781 1030 log.go:172] (0xc000c3a280) (5) Data frame handling\nI0125 21:53:32.280830 1030 log.go:172] (0xc000c3a280) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.125.96 80\nI0125 21:53:32.283826 1030 log.go:172] (0xc000c431e0) Data frame received for 5\nI0125 21:53:32.283921 1030 log.go:172] (0xc000c3a280) (5) Data frame handling\nI0125 21:53:32.283970 1030 log.go:172] (0xc000c3a280) (5) Data frame sent\nConnection to 10.96.125.96 80 port [tcp/http] succeeded!\nI0125 21:53:32.354835 1030 log.go:172] (0xc000c431e0) Data frame received for 1\nI0125 21:53:32.354960 1030 log.go:172] (0xc000c431e0) (0xc000b76500) Stream removed, broadcasting: 3\nI0125 21:53:32.355037 1030 log.go:172] (0xc000b76460) (1) Data frame handling\nI0125 21:53:32.355082 1030 log.go:172] (0xc000b76460) (1) Data frame sent\nI0125 21:53:32.355101 1030 log.go:172] (0xc000c431e0) (0xc000c3a280) Stream removed, broadcasting: 5\nI0125 21:53:32.355135 1030 log.go:172] (0xc000c431e0) (0xc000b76460) Stream removed, broadcasting: 1\nI0125 21:53:32.355169 1030 log.go:172] (0xc000c431e0) Go away received\nI0125 21:53:32.356058 1030 log.go:172] (0xc000c431e0) (0xc000b76460) Stream removed, broadcasting: 1\nI0125 21:53:32.356070 1030 log.go:172] (0xc000c431e0) (0xc000b76500) Stream removed, broadcasting: 3\nI0125 21:53:32.356082 1030 log.go:172] (0xc000c431e0) (0xc000c3a280) Stream removed, broadcasting: 5\n" Jan 25 21:53:32.366: INFO: stdout: "" Jan 25 21:53:32.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7351 execpodq822b -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 30705' Jan 25 21:53:32.702: INFO: stderr: "I0125 21:53:32.531525 1050 log.go:172] (0xc000678a50) (0xc00065a000) Create stream\nI0125 21:53:32.532155 1050 log.go:172] (0xc000678a50) (0xc00065a000) Stream added, broadcasting: 1\nI0125 21:53:32.536983 1050 log.go:172] (0xc000678a50) Reply frame received for 1\nI0125 21:53:32.537085 1050 log.go:172] (0xc000678a50) (0xc000a8a000) Create stream\nI0125 21:53:32.537106 1050 log.go:172] (0xc000678a50) (0xc000a8a000) Stream added, broadcasting: 3\nI0125 21:53:32.538462 1050 log.go:172] (0xc000678a50) Reply frame received for 3\nI0125 21:53:32.538493 1050 log.go:172] (0xc000678a50) (0xc000677ae0) Create stream\nI0125 21:53:32.538513 1050 log.go:172] (0xc000678a50) (0xc000677ae0) Stream added, broadcasting: 5\nI0125 21:53:32.539464 1050 log.go:172] (0xc000678a50) Reply frame received for 5\nI0125 21:53:32.629546 1050 log.go:172] (0xc000678a50) Data frame received for 5\nI0125 21:53:32.629634 1050 log.go:172] (0xc000677ae0) (5) Data frame handling\nI0125 21:53:32.629670 1050 log.go:172] (0xc000677ae0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 30705\nI0125 21:53:32.630530 1050 log.go:172] (0xc000678a50) Data frame received for 5\nI0125 21:53:32.630542 1050 log.go:172] (0xc000677ae0) (5) Data frame handling\nI0125 21:53:32.630569 1050 log.go:172] (0xc000677ae0) (5) Data frame sent\nConnection to 10.96.2.250 30705 port [tcp/30705] succeeded!\nI0125 21:53:32.692976 1050 log.go:172] (0xc000678a50) Data frame received for 1\nI0125 21:53:32.693080 1050 log.go:172] (0xc00065a000) (1) Data frame handling\nI0125 21:53:32.693124 1050 log.go:172] (0xc00065a000) (1) Data frame sent\nI0125 21:53:32.693397 1050 log.go:172] (0xc000678a50) (0xc00065a000) 
Stream removed, broadcasting: 1\nI0125 21:53:32.694079 1050 log.go:172] (0xc000678a50) (0xc000a8a000) Stream removed, broadcasting: 3\nI0125 21:53:32.694217 1050 log.go:172] (0xc000678a50) (0xc000677ae0) Stream removed, broadcasting: 5\nI0125 21:53:32.694296 1050 log.go:172] (0xc000678a50) (0xc00065a000) Stream removed, broadcasting: 1\nI0125 21:53:32.694310 1050 log.go:172] (0xc000678a50) (0xc000a8a000) Stream removed, broadcasting: 3\nI0125 21:53:32.694337 1050 log.go:172] (0xc000678a50) Go away received\nI0125 21:53:32.694420 1050 log.go:172] (0xc000678a50) (0xc000677ae0) Stream removed, broadcasting: 5\n" Jan 25 21:53:32.702: INFO: stdout: "" Jan 25 21:53:32.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7351 execpodq822b -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 30705' Jan 25 21:53:33.023: INFO: stderr: "I0125 21:53:32.874376 1070 log.go:172] (0xc000111600) (0xc0006a9a40) Create stream\nI0125 21:53:32.874528 1070 log.go:172] (0xc000111600) (0xc0006a9a40) Stream added, broadcasting: 1\nI0125 21:53:32.880550 1070 log.go:172] (0xc000111600) Reply frame received for 1\nI0125 21:53:32.880603 1070 log.go:172] (0xc000111600) (0xc0008ca000) Create stream\nI0125 21:53:32.880622 1070 log.go:172] (0xc000111600) (0xc0008ca000) Stream added, broadcasting: 3\nI0125 21:53:32.882169 1070 log.go:172] (0xc000111600) Reply frame received for 3\nI0125 21:53:32.882280 1070 log.go:172] (0xc000111600) (0xc000966000) Create stream\nI0125 21:53:32.882298 1070 log.go:172] (0xc000111600) (0xc000966000) Stream added, broadcasting: 5\nI0125 21:53:32.883752 1070 log.go:172] (0xc000111600) Reply frame received for 5\nI0125 21:53:32.946358 1070 log.go:172] (0xc000111600) Data frame received for 5\nI0125 21:53:32.946451 1070 log.go:172] (0xc000966000) (5) Data frame handling\nI0125 21:53:32.946493 1070 log.go:172] (0xc000966000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 30705\nI0125 21:53:32.950510 1070 log.go:172] (0xc000111600) Data frame received for 5\nI0125 21:53:32.950643 1070 log.go:172] (0xc000966000) (5) Data frame handling\nI0125 21:53:32.950664 1070 log.go:172] (0xc000966000) (5) Data frame sent\nConnection to 10.96.1.234 30705 port [tcp/30705] succeeded!\nI0125 21:53:33.013554 1070 log.go:172] (0xc000111600) (0xc0008ca000) Stream removed, broadcasting: 3\nI0125 21:53:33.013763 1070 log.go:172] (0xc000111600) Data frame received for 1\nI0125 21:53:33.013795 1070 log.go:172] (0xc0006a9a40) (1) Data frame handling\nI0125 21:53:33.013820 1070 log.go:172] (0xc0006a9a40) (1) Data frame sent\nI0125 21:53:33.013836 1070 log.go:172] (0xc000111600) (0xc0006a9a40) Stream removed, broadcasting: 1\nI0125 21:53:33.014214 1070 log.go:172] (0xc000111600) (0xc000966000) Stream removed, broadcasting: 5\nI0125 21:53:33.014670 1070 log.go:172] (0xc000111600) (0xc0006a9a40) Stream removed, broadcasting: 1\nI0125 21:53:33.014687 1070 log.go:172] (0xc000111600) (0xc0008ca000) Stream removed, broadcasting: 3\nI0125 21:53:33.014695 1070 log.go:172] (0xc000111600) (0xc000966000) Stream removed, broadcasting: 5\n" Jan 25 21:53:33.023: INFO: stdout: "" Jan 25 21:53:33.023: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:53:33.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7351" for this suite. 
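The passing checks above exercise an ExternalName Service converted in place to type=NodePort, then probed by name, by ClusterIP, and via each node's NodePort with nc. For reference, the same flow can be reproduced by hand; a minimal sketch, assuming illustrative names, and reading back the allocated NodePort rather than hard-coding it (30705 above was assigned by the API server, not chosen):

# Create an ExternalName service, then convert it to NodePort with a port definition.
kubectl create service externalname externalname-service --external-name example.com
kubectl patch service externalname-service --type=merge \
  -p '{"spec":{"type":"NodePort","externalName":null,"ports":[{"port":80,"targetPort":80}]}}'
# Read back the allocated NodePort.
NODE_PORT=$(kubectl get service externalname-service -o jsonpath='{.spec.ports[0].nodePort}')
# Probe from an exec pod the way the test does, by service name and by node IP.
kubectl exec execpodq822b -- /bin/sh -x -c "nc -zv -t -w 2 externalname-service 80"
kubectl exec execpodq822b -- /bin/sh -x -c "nc -zv -t -w 2 <node-ip> $NODE_PORT"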
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:25.548 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":92,"skipped":1498,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:53:33.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 25 21:53:33.605: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 25 21:53:35.622: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:53:37.629: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:53:39.694: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:53:41.635: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:53:43.632: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:53:45.664: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 21:53:47.629: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586013, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 25 21:53:50.665: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 25 21:53:50.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6825-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:53:51.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4121" for this suite. STEP: Destroying namespace "webhook-4121-markers" for this suite. 
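The webhook test above flips a multi-version CRD's storage version from v1 to v2 and confirms the registered mutating webhook still fires on a subsequent patch. The storage flip itself is an ordinary CRD patch; a minimal sketch, assuming illustrative version indices (exactly one version may carry storage: true at any time):

# Mark v2 as the storage version on an apiextensions.k8s.io/v1 CRD.
kubectl patch crd e2e-test-webhook-6825-crds.webhook.example.com --type=json -p '[
  {"op": "replace", "path": "/spec/versions/0/storage", "value": false},
  {"op": "replace", "path": "/spec/versions/1/storage", "value": true}
]'
# Existing objects keep their old stored version until rewritten; the test's
# later patch of the custom resource persists it at the new storage version.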
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.568 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":93,"skipped":1539,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:53:51.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Jan 25 21:53:51.875: INFO: Waiting up to 5m0s for pod "client-containers-f1c300d2-8f47-463c-a21e-ea96629961e0" in namespace "containers-1770" to be "success or failure" Jan 25 21:53:51.885: INFO: Pod "client-containers-f1c300d2-8f47-463c-a21e-ea96629961e0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.998434ms Jan 25 21:53:53.897: INFO: Pod "client-containers-f1c300d2-8f47-463c-a21e-ea96629961e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022194237s Jan 25 21:53:55.933: INFO: Pod "client-containers-f1c300d2-8f47-463c-a21e-ea96629961e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057408335s Jan 25 21:53:57.989: INFO: Pod "client-containers-f1c300d2-8f47-463c-a21e-ea96629961e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113717619s Jan 25 21:54:00.008: INFO: Pod "client-containers-f1c300d2-8f47-463c-a21e-ea96629961e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.132807695s Jan 25 21:54:02.018: INFO: Pod "client-containers-f1c300d2-8f47-463c-a21e-ea96629961e0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.1428178s Jan 25 21:54:04.049: INFO: Pod "client-containers-f1c300d2-8f47-463c-a21e-ea96629961e0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.173519832s
STEP: Saw pod success
Jan 25 21:54:04.049: INFO: Pod "client-containers-f1c300d2-8f47-463c-a21e-ea96629961e0" satisfied condition "success or failure"
Jan 25 21:54:04.052: INFO: Trying to get logs from node jerma-node pod client-containers-f1c300d2-8f47-463c-a21e-ea96629961e0 container test-container:
STEP: delete the pod
Jan 25 21:54:04.096: INFO: Waiting for pod client-containers-f1c300d2-8f47-463c-a21e-ea96629961e0 to disappear
Jan 25 21:54:04.099: INFO: Pod client-containers-f1c300d2-8f47-463c-a21e-ea96629961e0 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 21:54:04.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1770" for this suite.
• [SLOW TEST:12.412 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1540,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
removes definition from spec when one version gets changed to not be served [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 21:54:04.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Jan 25 21:54:04.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 21:54:19.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9350" for this suite.
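"Mark a version not served" amounts to setting served: false on one entry of the CRD's spec.versions list; the apiserver then drops that version's definitions from the published OpenAPI document while leaving the other version intact. A minimal sketch with hypothetical names:

# Stop serving the second version of a multi-version CRD.
kubectl patch crd <crd-name> --type=json \
  -p '[{"op": "replace", "path": "/spec/versions/1/served", "value": false}]'
# The unserved version's definition should disappear from the aggregated spec.
kubectl get --raw /openapi/v2 | grep -c '<v2-definition-name>'   # expect 0 once unpublished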
• [SLOW TEST:14.960 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
removes definition from spec when one version gets changed to not be served [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":95,"skipped":1540,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 21:54:19.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 21:54:26.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7530" for this suite.
• [SLOW TEST:7.178 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":96,"skipped":1548,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions
should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 21:54:26.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Jan 25 21:54:26.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 25 21:54:26.652: INFO: stderr: ""
Jan 25 21:54:26.652: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 21:54:26.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6454" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":97,"skipped":1571,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:54:26.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 25 21:54:26.872: INFO: Waiting up to 5m0s for pod "pod-837c1981-e8e9-4c1a-b33c-5cf4e2455f19" in namespace "emptydir-3662" to be "success or failure" Jan 25 21:54:26.885: INFO: Pod "pod-837c1981-e8e9-4c1a-b33c-5cf4e2455f19": Phase="Pending", Reason="", readiness=false. Elapsed: 13.130017ms Jan 25 21:54:28.916: INFO: Pod "pod-837c1981-e8e9-4c1a-b33c-5cf4e2455f19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0442391s Jan 25 21:54:30.921: INFO: Pod "pod-837c1981-e8e9-4c1a-b33c-5cf4e2455f19": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049457797s Jan 25 21:54:32.930: INFO: Pod "pod-837c1981-e8e9-4c1a-b33c-5cf4e2455f19": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057985558s Jan 25 21:54:34.940: INFO: Pod "pod-837c1981-e8e9-4c1a-b33c-5cf4e2455f19": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067755064s Jan 25 21:54:36.950: INFO: Pod "pod-837c1981-e8e9-4c1a-b33c-5cf4e2455f19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.077945856s STEP: Saw pod success Jan 25 21:54:36.950: INFO: Pod "pod-837c1981-e8e9-4c1a-b33c-5cf4e2455f19" satisfied condition "success or failure" Jan 25 21:54:36.953: INFO: Trying to get logs from node jerma-node pod pod-837c1981-e8e9-4c1a-b33c-5cf4e2455f19 container test-container: STEP: delete the pod Jan 25 21:54:37.024: INFO: Waiting for pod pod-837c1981-e8e9-4c1a-b33c-5cf4e2455f19 to disappear Jan 25 21:54:37.031: INFO: Pod pod-837c1981-e8e9-4c1a-b33c-5cf4e2455f19 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:54:37.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3662" for this suite. 
• [SLOW TEST:10.363 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1607,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
[sig-api-machinery] Namespaces [Serial]
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 21:54:37.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 21:55:14.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7303" for this suite.
STEP: Destroying namespace "nsdeletetest-7875" for this suite.
Jan 25 21:55:14.409: INFO: Namespace nsdeletetest-7875 was already deleted
STEP: Destroying namespace "nsdeletetest-8694" for this suite.
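The STEP sequence above is reproducible verbatim with kubectl; the point of the test is that deleting a namespace garbage-collects its pods, and a recreated namespace of the same name starts empty. A minimal sketch with illustrative names:

kubectl create namespace nsdelete-demo
kubectl run test-pod --image=busybox --restart=Never -n nsdelete-demo -- sleep 3600
kubectl wait pod/test-pod -n nsdelete-demo --for=condition=Ready --timeout=2m
kubectl delete namespace nsdelete-demo          # blocks until finalizers clear
kubectl create namespace nsdelete-demo
kubectl get pods -n nsdelete-demo               # expect "No resources found"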
• [SLOW TEST:37.377 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":99,"skipped":1607,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:55:14.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 25 21:55:14.482: INFO: Creating ReplicaSet my-hostname-basic-0a9fe4ed-7755-44e7-a46f-f652488ece64 Jan 25 21:55:14.529: INFO: Pod name my-hostname-basic-0a9fe4ed-7755-44e7-a46f-f652488ece64: Found 0 pods out of 1 Jan 25 21:55:19.591: INFO: Pod name my-hostname-basic-0a9fe4ed-7755-44e7-a46f-f652488ece64: Found 1 pods out of 1 Jan 25 21:55:19.591: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-0a9fe4ed-7755-44e7-a46f-f652488ece64" is running Jan 25 21:55:21.610: INFO: Pod "my-hostname-basic-0a9fe4ed-7755-44e7-a46f-f652488ece64-mpgr7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 21:55:14 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 21:55:14 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-0a9fe4ed-7755-44e7-a46f-f652488ece64]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 21:55:14 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-0a9fe4ed-7755-44e7-a46f-f652488ece64]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 21:55:14 +0000 UTC Reason: Message:}]) Jan 25 21:55:21.610: INFO: Trying to dial the pod Jan 25 21:55:26.641: INFO: Controller my-hostname-basic-0a9fe4ed-7755-44e7-a46f-f652488ece64: Got expected result from replica 1 [my-hostname-basic-0a9fe4ed-7755-44e7-a46f-f652488ece64-mpgr7]: "my-hostname-basic-0a9fe4ed-7755-44e7-a46f-f652488ece64-mpgr7", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:55:26.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1544" for this suite. 
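The ReplicaSet under test runs agnhost in its serve-hostname mode, so each replica answers HTTP with its own pod name; the "Got expected result from replica 1" line above is that response. A minimal manifest sketch (the name is illustrative; serve-hostname listens on 9376 by default):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["serve-hostname"]   # replies to HTTP GET with the pod's hostname
        ports:
        - containerPort: 9376
EOF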
• [SLOW TEST:12.245 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":100,"skipped":1633,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:55:26.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 25 21:55:26.783: INFO: Waiting up to 5m0s for pod "downwardapi-volume-43a7e04d-08f1-4eb5-8674-40d32d49d68d" in namespace "downward-api-9796" to be "success or failure" Jan 25 21:55:26.812: INFO: Pod "downwardapi-volume-43a7e04d-08f1-4eb5-8674-40d32d49d68d": Phase="Pending", Reason="", readiness=false. Elapsed: 28.455326ms Jan 25 21:55:28.855: INFO: Pod "downwardapi-volume-43a7e04d-08f1-4eb5-8674-40d32d49d68d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071460009s Jan 25 21:55:30.863: INFO: Pod "downwardapi-volume-43a7e04d-08f1-4eb5-8674-40d32d49d68d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078928213s Jan 25 21:55:32.875: INFO: Pod "downwardapi-volume-43a7e04d-08f1-4eb5-8674-40d32d49d68d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09082048s Jan 25 21:55:34.882: INFO: Pod "downwardapi-volume-43a7e04d-08f1-4eb5-8674-40d32d49d68d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098362231s Jan 25 21:55:36.892: INFO: Pod "downwardapi-volume-43a7e04d-08f1-4eb5-8674-40d32d49d68d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.1087303s Jan 25 21:55:38.897: INFO: Pod "downwardapi-volume-43a7e04d-08f1-4eb5-8674-40d32d49d68d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.112842475s STEP: Saw pod success Jan 25 21:55:38.897: INFO: Pod "downwardapi-volume-43a7e04d-08f1-4eb5-8674-40d32d49d68d" satisfied condition "success or failure" Jan 25 21:55:38.899: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-43a7e04d-08f1-4eb5-8674-40d32d49d68d container client-container: STEP: delete the pod Jan 25 21:55:39.073: INFO: Waiting for pod downwardapi-volume-43a7e04d-08f1-4eb5-8674-40d32d49d68d to disappear Jan 25 21:55:39.082: INFO: Pod downwardapi-volume-43a7e04d-08f1-4eb5-8674-40d32d49d68d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:55:39.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9796" for this suite. • [SLOW TEST:12.429 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1636,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:55:39.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-c2e6d036-90b7-418e-aaf3-d7abc5e8f516 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-c2e6d036-90b7-418e-aaf3-d7abc5e8f516 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:55:51.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9673" for this suite. 
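The "waiting to observe update in volume" step above relies on the kubelet's periodic sync: files in a configMap volume are rewritten in place after the API object changes, with eventual rather than immediate propagation. A hand-run sketch with illustrative names:

kubectl create configmap test-upd --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-watch
spec:
  containers:
  - name: c
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: test-upd
EOF
kubectl patch configmap test-upd -p '{"data":{"data-1":"value-2"}}'
# Propagation is bounded by the kubelet sync period plus its cache TTL, so poll:
until kubectl exec cm-watch -- cat /etc/cm/data-1 | grep -q value-2; do sleep 2; done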
• [SLOW TEST:12.306 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1647,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:55:51.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-0312ee03-f5cb-48db-a27c-128d32ed90a0 STEP: Creating a pod to test consume secrets Jan 25 21:55:51.516: INFO: Waiting up to 5m0s for pod "pod-secrets-b357c189-dc3e-4134-8ff8-3f718327a70f" in namespace "secrets-8336" to be "success or failure" Jan 25 21:55:51.534: INFO: Pod "pod-secrets-b357c189-dc3e-4134-8ff8-3f718327a70f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.086296ms Jan 25 21:55:53.541: INFO: Pod "pod-secrets-b357c189-dc3e-4134-8ff8-3f718327a70f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024846542s Jan 25 21:55:55.546: INFO: Pod "pod-secrets-b357c189-dc3e-4134-8ff8-3f718327a70f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029315035s Jan 25 21:55:57.552: INFO: Pod "pod-secrets-b357c189-dc3e-4134-8ff8-3f718327a70f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036041936s Jan 25 21:55:59.559: INFO: Pod "pod-secrets-b357c189-dc3e-4134-8ff8-3f718327a70f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0430274s Jan 25 21:56:01.568: INFO: Pod "pod-secrets-b357c189-dc3e-4134-8ff8-3f718327a70f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.051664596s Jan 25 21:56:03.578: INFO: Pod "pod-secrets-b357c189-dc3e-4134-8ff8-3f718327a70f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.061503483s STEP: Saw pod success Jan 25 21:56:03.578: INFO: Pod "pod-secrets-b357c189-dc3e-4134-8ff8-3f718327a70f" satisfied condition "success or failure" Jan 25 21:56:03.582: INFO: Trying to get logs from node jerma-node pod pod-secrets-b357c189-dc3e-4134-8ff8-3f718327a70f container secret-volume-test: STEP: delete the pod Jan 25 21:56:03.797: INFO: Waiting for pod pod-secrets-b357c189-dc3e-4134-8ff8-3f718327a70f to disappear Jan 25 21:56:03.864: INFO: Pod pod-secrets-b357c189-dc3e-4134-8ff8-3f718327a70f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:56:03.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8336" for this suite. • [SLOW TEST:12.490 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1663,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:56:03.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-9206 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-9206 STEP: Deleting pre-stop pod Jan 25 21:56:27.198: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:56:27.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-9206" for this suite. 
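The payload above ("prestop": 1) is the server pod recording one HTTP hit from the tester pod's preStop hook, which the kubelet runs to completion (or until the grace period expires) before delivering SIGTERM. The hook itself is plain pod spec; a minimal sketch, where the names, port, and URL are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: c
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "wget -qO- http://server:8080/prestop || true"]
EOF
kubectl delete pod prestop-demo   # the preStop exec runs before SIGTERM is sent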
• [SLOW TEST:23.347 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":104,"skipped":1695,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:56:27.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-834cb2a0-7327-4458-a3f6-0786f8149f9b STEP: Creating a pod to test consume secrets Jan 25 21:56:27.427: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-17e7059a-5ee3-44fe-875f-0240906230d4" in namespace "projected-290" to be "success or failure" Jan 25 21:56:27.436: INFO: Pod "pod-projected-secrets-17e7059a-5ee3-44fe-875f-0240906230d4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.182242ms Jan 25 21:56:29.444: INFO: Pod "pod-projected-secrets-17e7059a-5ee3-44fe-875f-0240906230d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016990768s Jan 25 21:56:31.455: INFO: Pod "pod-projected-secrets-17e7059a-5ee3-44fe-875f-0240906230d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028445944s Jan 25 21:56:33.463: INFO: Pod "pod-projected-secrets-17e7059a-5ee3-44fe-875f-0240906230d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035798707s Jan 25 21:56:35.471: INFO: Pod "pod-projected-secrets-17e7059a-5ee3-44fe-875f-0240906230d4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044452755s Jan 25 21:56:37.482: INFO: Pod "pod-projected-secrets-17e7059a-5ee3-44fe-875f-0240906230d4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.054978624s STEP: Saw pod success Jan 25 21:56:37.482: INFO: Pod "pod-projected-secrets-17e7059a-5ee3-44fe-875f-0240906230d4" satisfied condition "success or failure" Jan 25 21:56:37.488: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-17e7059a-5ee3-44fe-875f-0240906230d4 container projected-secret-volume-test: STEP: delete the pod Jan 25 21:56:37.532: INFO: Waiting for pod pod-projected-secrets-17e7059a-5ee3-44fe-875f-0240906230d4 to disappear Jan 25 21:56:37.613: INFO: Pod pod-projected-secrets-17e7059a-5ee3-44fe-875f-0240906230d4 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:56:37.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-290" for this suite. • [SLOW TEST:10.383 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1807,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:56:37.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-37cef430-d4aa-469e-83dc-ad3fbb146b1a STEP: Creating a pod to test consume configMaps Jan 25 21:56:37.687: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5712c475-fabd-4258-ac01-d768caf86d9e" in namespace "projected-8016" to be "success or failure" Jan 25 21:56:37.716: INFO: Pod "pod-projected-configmaps-5712c475-fabd-4258-ac01-d768caf86d9e": Phase="Pending", Reason="", readiness=false. Elapsed: 28.753614ms Jan 25 21:56:39.726: INFO: Pod "pod-projected-configmaps-5712c475-fabd-4258-ac01-d768caf86d9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038650749s Jan 25 21:56:41.735: INFO: Pod "pod-projected-configmaps-5712c475-fabd-4258-ac01-d768caf86d9e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047063552s Jan 25 21:56:43.742: INFO: Pod "pod-projected-configmaps-5712c475-fabd-4258-ac01-d768caf86d9e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.054319268s Jan 25 21:56:45.784: INFO: Pod "pod-projected-configmaps-5712c475-fabd-4258-ac01-d768caf86d9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.096730272s STEP: Saw pod success Jan 25 21:56:45.785: INFO: Pod "pod-projected-configmaps-5712c475-fabd-4258-ac01-d768caf86d9e" satisfied condition "success or failure" Jan 25 21:56:45.809: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-5712c475-fabd-4258-ac01-d768caf86d9e container projected-configmap-volume-test: STEP: delete the pod Jan 25 21:56:45.918: INFO: Waiting for pod pod-projected-configmaps-5712c475-fabd-4258-ac01-d768caf86d9e to disappear Jan 25 21:56:45.959: INFO: Pod pod-projected-configmaps-5712c475-fabd-4258-ac01-d768caf86d9e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:56:45.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8016" for this suite. • [SLOW TEST:8.350 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1834,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:56:45.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jan 25 21:57:10.409: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6963 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 21:57:10.409: INFO: >>> kubeConfig: /root/.kube/config I0125 21:57:10.478503 8 log.go:172] (0xc00056a840) (0xc0010c0f00) Create stream I0125 21:57:10.478922 8 log.go:172] (0xc00056a840) (0xc0010c0f00) Stream added, broadcasting: 1 I0125 21:57:10.496588 8 log.go:172] (0xc00056a840) Reply frame received for 1 I0125 21:57:10.496823 8 log.go:172] (0xc00056a840) (0xc0010c10e0) Create stream I0125 21:57:10.496851 8 log.go:172] (0xc00056a840) (0xc0010c10e0) Stream added, broadcasting: 3 I0125 21:57:10.498767 8 log.go:172] (0xc00056a840) Reply frame 
received for 3 I0125 21:57:10.498831 8 log.go:172] (0xc00056a840) (0xc000d6b720) Create stream I0125 21:57:10.498848 8 log.go:172] (0xc00056a840) (0xc000d6b720) Stream added, broadcasting: 5 I0125 21:57:10.501184 8 log.go:172] (0xc00056a840) Reply frame received for 5 I0125 21:57:10.597428 8 log.go:172] (0xc00056a840) Data frame received for 3 I0125 21:57:10.597862 8 log.go:172] (0xc0010c10e0) (3) Data frame handling I0125 21:57:10.597937 8 log.go:172] (0xc0010c10e0) (3) Data frame sent I0125 21:57:10.718948 8 log.go:172] (0xc00056a840) Data frame received for 1 I0125 21:57:10.719194 8 log.go:172] (0xc00056a840) (0xc0010c10e0) Stream removed, broadcasting: 3 I0125 21:57:10.719318 8 log.go:172] (0xc00056a840) (0xc000d6b720) Stream removed, broadcasting: 5 I0125 21:57:10.719336 8 log.go:172] (0xc0010c0f00) (1) Data frame handling I0125 21:57:10.719360 8 log.go:172] (0xc0010c0f00) (1) Data frame sent I0125 21:57:10.719369 8 log.go:172] (0xc00056a840) (0xc0010c0f00) Stream removed, broadcasting: 1 I0125 21:57:10.719449 8 log.go:172] (0xc00056a840) Go away received I0125 21:57:10.719924 8 log.go:172] (0xc00056a840) (0xc0010c0f00) Stream removed, broadcasting: 1 I0125 21:57:10.719933 8 log.go:172] (0xc00056a840) (0xc0010c10e0) Stream removed, broadcasting: 3 I0125 21:57:10.719941 8 log.go:172] (0xc00056a840) (0xc000d6b720) Stream removed, broadcasting: 5 Jan 25 21:57:10.719: INFO: Exec stderr: "" Jan 25 21:57:10.720: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6963 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 21:57:10.720: INFO: >>> kubeConfig: /root/.kube/config I0125 21:57:10.771189 8 log.go:172] (0xc002b044d0) (0xc001234460) Create stream I0125 21:57:10.771442 8 log.go:172] (0xc002b044d0) (0xc001234460) Stream added, broadcasting: 1 I0125 21:57:10.774787 8 log.go:172] (0xc002b044d0) Reply frame received for 1 I0125 21:57:10.774828 8 log.go:172] (0xc002b044d0) (0xc001fe0140) Create stream I0125 21:57:10.774836 8 log.go:172] (0xc002b044d0) (0xc001fe0140) Stream added, broadcasting: 3 I0125 21:57:10.775772 8 log.go:172] (0xc002b044d0) Reply frame received for 3 I0125 21:57:10.775792 8 log.go:172] (0xc002b044d0) (0xc000d6b7c0) Create stream I0125 21:57:10.775799 8 log.go:172] (0xc002b044d0) (0xc000d6b7c0) Stream added, broadcasting: 5 I0125 21:57:10.776648 8 log.go:172] (0xc002b044d0) Reply frame received for 5 I0125 21:57:10.835221 8 log.go:172] (0xc002b044d0) Data frame received for 3 I0125 21:57:10.835332 8 log.go:172] (0xc001fe0140) (3) Data frame handling I0125 21:57:10.835367 8 log.go:172] (0xc001fe0140) (3) Data frame sent I0125 21:57:10.980564 8 log.go:172] (0xc002b044d0) Data frame received for 1 I0125 21:57:10.980759 8 log.go:172] (0xc002b044d0) (0xc000d6b7c0) Stream removed, broadcasting: 5 I0125 21:57:10.980874 8 log.go:172] (0xc001234460) (1) Data frame handling I0125 21:57:10.980919 8 log.go:172] (0xc002b044d0) (0xc001fe0140) Stream removed, broadcasting: 3 I0125 21:57:10.980953 8 log.go:172] (0xc001234460) (1) Data frame sent I0125 21:57:10.980976 8 log.go:172] (0xc002b044d0) (0xc001234460) Stream removed, broadcasting: 1 I0125 21:57:10.981007 8 log.go:172] (0xc002b044d0) Go away received I0125 21:57:10.981352 8 log.go:172] (0xc002b044d0) (0xc001234460) Stream removed, broadcasting: 1 I0125 21:57:10.981429 8 log.go:172] (0xc002b044d0) (0xc001fe0140) Stream removed, broadcasting: 3 I0125 21:57:10.981440 8 log.go:172] (0xc002b044d0) (0xc000d6b7c0) Stream removed, 
broadcasting: 5 Jan 25 21:57:10.981: INFO: Exec stderr: "" Jan 25 21:57:10.981: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6963 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 21:57:10.981: INFO: >>> kubeConfig: /root/.kube/config I0125 21:57:11.040429 8 log.go:172] (0xc002b04b00) (0xc0012346e0) Create stream I0125 21:57:11.040798 8 log.go:172] (0xc002b04b00) (0xc0012346e0) Stream added, broadcasting: 1 I0125 21:57:11.046207 8 log.go:172] (0xc002b04b00) Reply frame received for 1 I0125 21:57:11.046344 8 log.go:172] (0xc002b04b00) (0xc002a02140) Create stream I0125 21:57:11.046399 8 log.go:172] (0xc002b04b00) (0xc002a02140) Stream added, broadcasting: 3 I0125 21:57:11.048231 8 log.go:172] (0xc002b04b00) Reply frame received for 3 I0125 21:57:11.048279 8 log.go:172] (0xc002b04b00) (0xc000ade000) Create stream I0125 21:57:11.048297 8 log.go:172] (0xc002b04b00) (0xc000ade000) Stream added, broadcasting: 5 I0125 21:57:11.049642 8 log.go:172] (0xc002b04b00) Reply frame received for 5 I0125 21:57:11.125479 8 log.go:172] (0xc002b04b00) Data frame received for 3 I0125 21:57:11.125557 8 log.go:172] (0xc002a02140) (3) Data frame handling I0125 21:57:11.125576 8 log.go:172] (0xc002a02140) (3) Data frame sent I0125 21:57:11.204669 8 log.go:172] (0xc002b04b00) (0xc002a02140) Stream removed, broadcasting: 3 I0125 21:57:11.204780 8 log.go:172] (0xc002b04b00) Data frame received for 1 I0125 21:57:11.204788 8 log.go:172] (0xc0012346e0) (1) Data frame handling I0125 21:57:11.204796 8 log.go:172] (0xc0012346e0) (1) Data frame sent I0125 21:57:11.204823 8 log.go:172] (0xc002b04b00) (0xc0012346e0) Stream removed, broadcasting: 1 I0125 21:57:11.204871 8 log.go:172] (0xc002b04b00) (0xc000ade000) Stream removed, broadcasting: 5 I0125 21:57:11.204936 8 log.go:172] (0xc002b04b00) Go away received I0125 21:57:11.205060 8 log.go:172] (0xc002b04b00) (0xc0012346e0) Stream removed, broadcasting: 1 I0125 21:57:11.205073 8 log.go:172] (0xc002b04b00) (0xc002a02140) Stream removed, broadcasting: 3 I0125 21:57:11.205084 8 log.go:172] (0xc002b04b00) (0xc000ade000) Stream removed, broadcasting: 5 Jan 25 21:57:11.205: INFO: Exec stderr: "" Jan 25 21:57:11.205: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6963 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 21:57:11.205: INFO: >>> kubeConfig: /root/.kube/config I0125 21:57:11.241388 8 log.go:172] (0xc002c30420) (0xc002a02640) Create stream I0125 21:57:11.241560 8 log.go:172] (0xc002c30420) (0xc002a02640) Stream added, broadcasting: 1 I0125 21:57:11.245263 8 log.go:172] (0xc002c30420) Reply frame received for 1 I0125 21:57:11.245301 8 log.go:172] (0xc002c30420) (0xc000ade5a0) Create stream I0125 21:57:11.245311 8 log.go:172] (0xc002c30420) (0xc000ade5a0) Stream added, broadcasting: 3 I0125 21:57:11.246483 8 log.go:172] (0xc002c30420) Reply frame received for 3 I0125 21:57:11.246502 8 log.go:172] (0xc002c30420) (0xc0012348c0) Create stream I0125 21:57:11.246510 8 log.go:172] (0xc002c30420) (0xc0012348c0) Stream added, broadcasting: 5 I0125 21:57:11.247787 8 log.go:172] (0xc002c30420) Reply frame received for 5 I0125 21:57:11.311446 8 log.go:172] (0xc002c30420) Data frame received for 3 I0125 21:57:11.311617 8 log.go:172] (0xc000ade5a0) (3) Data frame handling I0125 21:57:11.311654 8 log.go:172] (0xc000ade5a0) (3) Data frame sent I0125 21:57:11.382647 8 
log.go:172] (0xc002c30420) Data frame received for 1 I0125 21:57:11.382719 8 log.go:172] (0xc002a02640) (1) Data frame handling I0125 21:57:11.382741 8 log.go:172] (0xc002a02640) (1) Data frame sent I0125 21:57:11.383202 8 log.go:172] (0xc002c30420) (0xc002a02640) Stream removed, broadcasting: 1 I0125 21:57:11.383264 8 log.go:172] (0xc002c30420) (0xc000ade5a0) Stream removed, broadcasting: 3 I0125 21:57:11.383412 8 log.go:172] (0xc002c30420) (0xc0012348c0) Stream removed, broadcasting: 5 I0125 21:57:11.383478 8 log.go:172] (0xc002c30420) (0xc002a02640) Stream removed, broadcasting: 1 I0125 21:57:11.383492 8 log.go:172] (0xc002c30420) (0xc000ade5a0) Stream removed, broadcasting: 3 I0125 21:57:11.383500 8 log.go:172] (0xc002c30420) (0xc0012348c0) Stream removed, broadcasting: 5 Jan 25 21:57:11.383: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jan 25 21:57:11.383: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6963 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 21:57:11.383: INFO: >>> kubeConfig: /root/.kube/config I0125 21:57:11.461750 8 log.go:172] (0xc00056ae70) (0xc0010c1680) Create stream I0125 21:57:11.461855 8 log.go:172] (0xc00056ae70) (0xc0010c1680) Stream added, broadcasting: 1 I0125 21:57:11.467308 8 log.go:172] (0xc00056ae70) Reply frame received for 1 I0125 21:57:11.467509 8 log.go:172] (0xc00056ae70) (0xc002a02780) Create stream I0125 21:57:11.467529 8 log.go:172] (0xc00056ae70) (0xc002a02780) Stream added, broadcasting: 3 I0125 21:57:11.469385 8 log.go:172] (0xc00056ae70) Reply frame received for 3 I0125 21:57:11.469421 8 log.go:172] (0xc00056ae70) (0xc002a028c0) Create stream I0125 21:57:11.469429 8 log.go:172] (0xc00056ae70) (0xc002a028c0) Stream added, broadcasting: 5 I0125 21:57:11.470861 8 log.go:172] (0xc00056ae70) Reply frame received for 5 I0125 21:57:11.540500 8 log.go:172] (0xc00056ae70) Data frame received for 3 I0125 21:57:11.540560 8 log.go:172] (0xc002a02780) (3) Data frame handling I0125 21:57:11.540591 8 log.go:172] (0xc002a02780) (3) Data frame sent I0125 21:57:11.615371 8 log.go:172] (0xc00056ae70) Data frame received for 1 I0125 21:57:11.615519 8 log.go:172] (0xc0010c1680) (1) Data frame handling I0125 21:57:11.615562 8 log.go:172] (0xc0010c1680) (1) Data frame sent I0125 21:57:11.615593 8 log.go:172] (0xc00056ae70) (0xc0010c1680) Stream removed, broadcasting: 1 I0125 21:57:11.615708 8 log.go:172] (0xc00056ae70) (0xc002a02780) Stream removed, broadcasting: 3 I0125 21:57:11.616458 8 log.go:172] (0xc00056ae70) (0xc002a028c0) Stream removed, broadcasting: 5 I0125 21:57:11.616508 8 log.go:172] (0xc00056ae70) (0xc0010c1680) Stream removed, broadcasting: 1 I0125 21:57:11.616520 8 log.go:172] (0xc00056ae70) (0xc002a02780) Stream removed, broadcasting: 3 I0125 21:57:11.616527 8 log.go:172] (0xc00056ae70) (0xc002a028c0) Stream removed, broadcasting: 5 Jan 25 21:57:11.617: INFO: Exec stderr: "" Jan 25 21:57:11.617: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6963 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 21:57:11.617: INFO: >>> kubeConfig: /root/.kube/config I0125 21:57:11.830386 8 log.go:172] (0xc002b051e0) (0xc0012352c0) Create stream I0125 21:57:11.830576 8 log.go:172] (0xc002b051e0) (0xc0012352c0) Stream added, broadcasting: 1 I0125 21:57:11.835746 8 
log.go:172] (0xc002b051e0) Reply frame received for 1 I0125 21:57:11.835889 8 log.go:172] (0xc002b051e0) (0xc002a02a00) Create stream I0125 21:57:11.835920 8 log.go:172] (0xc002b051e0) (0xc002a02a00) Stream added, broadcasting: 3 I0125 21:57:11.837268 8 log.go:172] (0xc002b051e0) Reply frame received for 3 I0125 21:57:11.837289 8 log.go:172] (0xc002b051e0) (0xc001fe0280) Create stream I0125 21:57:11.837298 8 log.go:172] (0xc002b051e0) (0xc001fe0280) Stream added, broadcasting: 5 I0125 21:57:11.840149 8 log.go:172] (0xc002b051e0) Reply frame received for 5 I0125 21:57:11.940174 8 log.go:172] (0xc002b051e0) Data frame received for 3 I0125 21:57:11.940315 8 log.go:172] (0xc002a02a00) (3) Data frame handling I0125 21:57:11.940343 8 log.go:172] (0xc002a02a00) (3) Data frame sent I0125 21:57:12.028700 8 log.go:172] (0xc002b051e0) Data frame received for 1 I0125 21:57:12.028890 8 log.go:172] (0xc002b051e0) (0xc002a02a00) Stream removed, broadcasting: 3 I0125 21:57:12.029016 8 log.go:172] (0xc0012352c0) (1) Data frame handling I0125 21:57:12.029051 8 log.go:172] (0xc002b051e0) (0xc001fe0280) Stream removed, broadcasting: 5 I0125 21:57:12.029091 8 log.go:172] (0xc0012352c0) (1) Data frame sent I0125 21:57:12.029104 8 log.go:172] (0xc002b051e0) (0xc0012352c0) Stream removed, broadcasting: 1 I0125 21:57:12.029118 8 log.go:172] (0xc002b051e0) Go away received I0125 21:57:12.029421 8 log.go:172] (0xc002b051e0) (0xc0012352c0) Stream removed, broadcasting: 1 I0125 21:57:12.029436 8 log.go:172] (0xc002b051e0) (0xc002a02a00) Stream removed, broadcasting: 3 I0125 21:57:12.029448 8 log.go:172] (0xc002b051e0) (0xc001fe0280) Stream removed, broadcasting: 5 Jan 25 21:57:12.029: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jan 25 21:57:12.029: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6963 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 21:57:12.029: INFO: >>> kubeConfig: /root/.kube/config I0125 21:57:12.071018 8 log.go:172] (0xc0013a0370) (0xc001fe0be0) Create stream I0125 21:57:12.071172 8 log.go:172] (0xc0013a0370) (0xc001fe0be0) Stream added, broadcasting: 1 I0125 21:57:12.075633 8 log.go:172] (0xc0013a0370) Reply frame received for 1 I0125 21:57:12.075677 8 log.go:172] (0xc0013a0370) (0xc000ade8c0) Create stream I0125 21:57:12.075686 8 log.go:172] (0xc0013a0370) (0xc000ade8c0) Stream added, broadcasting: 3 I0125 21:57:12.076835 8 log.go:172] (0xc0013a0370) Reply frame received for 3 I0125 21:57:12.076854 8 log.go:172] (0xc0013a0370) (0xc001fe0c80) Create stream I0125 21:57:12.076861 8 log.go:172] (0xc0013a0370) (0xc001fe0c80) Stream added, broadcasting: 5 I0125 21:57:12.078158 8 log.go:172] (0xc0013a0370) Reply frame received for 5 I0125 21:57:12.196484 8 log.go:172] (0xc0013a0370) Data frame received for 3 I0125 21:57:12.196874 8 log.go:172] (0xc000ade8c0) (3) Data frame handling I0125 21:57:12.196904 8 log.go:172] (0xc000ade8c0) (3) Data frame sent I0125 21:57:12.322600 8 log.go:172] (0xc0013a0370) (0xc000ade8c0) Stream removed, broadcasting: 3 I0125 21:57:12.323016 8 log.go:172] (0xc0013a0370) Data frame received for 1 I0125 21:57:12.323049 8 log.go:172] (0xc0013a0370) (0xc001fe0c80) Stream removed, broadcasting: 5 I0125 21:57:12.323156 8 log.go:172] (0xc001fe0be0) (1) Data frame handling I0125 21:57:12.323195 8 log.go:172] (0xc001fe0be0) (1) Data frame sent I0125 21:57:12.323256 8 log.go:172] 
(0xc0013a0370) (0xc001fe0be0) Stream removed, broadcasting: 1 I0125 21:57:12.323292 8 log.go:172] (0xc0013a0370) Go away received I0125 21:57:12.323519 8 log.go:172] (0xc0013a0370) (0xc001fe0be0) Stream removed, broadcasting: 1 I0125 21:57:12.323564 8 log.go:172] (0xc0013a0370) (0xc000ade8c0) Stream removed, broadcasting: 3 I0125 21:57:12.323585 8 log.go:172] (0xc0013a0370) (0xc001fe0c80) Stream removed, broadcasting: 5 Jan 25 21:57:12.324: INFO: Exec stderr: "" Jan 25 21:57:12.324: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6963 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 21:57:12.324: INFO: >>> kubeConfig: /root/.kube/config I0125 21:57:12.364675 8 log.go:172] (0xc0013a08f0) (0xc001fe0dc0) Create stream I0125 21:57:12.364863 8 log.go:172] (0xc0013a08f0) (0xc001fe0dc0) Stream added, broadcasting: 1 I0125 21:57:12.368590 8 log.go:172] (0xc0013a08f0) Reply frame received for 1 I0125 21:57:12.368622 8 log.go:172] (0xc0013a08f0) (0xc001235400) Create stream I0125 21:57:12.368632 8 log.go:172] (0xc0013a08f0) (0xc001235400) Stream added, broadcasting: 3 I0125 21:57:12.369736 8 log.go:172] (0xc0013a08f0) Reply frame received for 3 I0125 21:57:12.369777 8 log.go:172] (0xc0013a08f0) (0xc000adea00) Create stream I0125 21:57:12.369799 8 log.go:172] (0xc0013a08f0) (0xc000adea00) Stream added, broadcasting: 5 I0125 21:57:12.370821 8 log.go:172] (0xc0013a08f0) Reply frame received for 5 I0125 21:57:12.422443 8 log.go:172] (0xc0013a08f0) Data frame received for 3 I0125 21:57:12.422562 8 log.go:172] (0xc001235400) (3) Data frame handling I0125 21:57:12.422593 8 log.go:172] (0xc001235400) (3) Data frame sent I0125 21:57:12.496811 8 log.go:172] (0xc0013a08f0) Data frame received for 1 I0125 21:57:12.496991 8 log.go:172] (0xc0013a08f0) (0xc001235400) Stream removed, broadcasting: 3 I0125 21:57:12.497086 8 log.go:172] (0xc001fe0dc0) (1) Data frame handling I0125 21:57:12.497123 8 log.go:172] (0xc001fe0dc0) (1) Data frame sent I0125 21:57:12.497140 8 log.go:172] (0xc0013a08f0) (0xc000adea00) Stream removed, broadcasting: 5 I0125 21:57:12.497153 8 log.go:172] (0xc0013a08f0) (0xc001fe0dc0) Stream removed, broadcasting: 1 I0125 21:57:12.497182 8 log.go:172] (0xc0013a08f0) Go away received I0125 21:57:12.497514 8 log.go:172] (0xc0013a08f0) (0xc001fe0dc0) Stream removed, broadcasting: 1 I0125 21:57:12.497530 8 log.go:172] (0xc0013a08f0) (0xc001235400) Stream removed, broadcasting: 3 I0125 21:57:12.497570 8 log.go:172] (0xc0013a08f0) (0xc000adea00) Stream removed, broadcasting: 5 Jan 25 21:57:12.497: INFO: Exec stderr: "" Jan 25 21:57:12.497: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6963 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 21:57:12.497: INFO: >>> kubeConfig: /root/.kube/config I0125 21:57:12.546483 8 log.go:172] (0xc00299c4d0) (0xc000adf540) Create stream I0125 21:57:12.546916 8 log.go:172] (0xc00299c4d0) (0xc000adf540) Stream added, broadcasting: 1 I0125 21:57:12.555694 8 log.go:172] (0xc00299c4d0) Reply frame received for 1 I0125 21:57:12.555767 8 log.go:172] (0xc00299c4d0) (0xc000adf900) Create stream I0125 21:57:12.555790 8 log.go:172] (0xc00299c4d0) (0xc000adf900) Stream added, broadcasting: 3 I0125 21:57:12.556953 8 log.go:172] (0xc00299c4d0) Reply frame received for 3 I0125 21:57:12.556985 8 log.go:172] (0xc00299c4d0) (0xc002a02aa0) Create 
stream I0125 21:57:12.556993 8 log.go:172] (0xc00299c4d0) (0xc002a02aa0) Stream added, broadcasting: 5 I0125 21:57:12.559345 8 log.go:172] (0xc00299c4d0) Reply frame received for 5 I0125 21:57:12.625249 8 log.go:172] (0xc00299c4d0) Data frame received for 3 I0125 21:57:12.625325 8 log.go:172] (0xc000adf900) (3) Data frame handling I0125 21:57:12.625350 8 log.go:172] (0xc000adf900) (3) Data frame sent I0125 21:57:12.688721 8 log.go:172] (0xc00299c4d0) (0xc000adf900) Stream removed, broadcasting: 3 I0125 21:57:12.688920 8 log.go:172] (0xc00299c4d0) Data frame received for 1 I0125 21:57:12.688945 8 log.go:172] (0xc000adf540) (1) Data frame handling I0125 21:57:12.688967 8 log.go:172] (0xc000adf540) (1) Data frame sent I0125 21:57:12.688985 8 log.go:172] (0xc00299c4d0) (0xc000adf540) Stream removed, broadcasting: 1 I0125 21:57:12.689167 8 log.go:172] (0xc00299c4d0) (0xc002a02aa0) Stream removed, broadcasting: 5 I0125 21:57:12.689216 8 log.go:172] (0xc00299c4d0) Go away received I0125 21:57:12.689239 8 log.go:172] (0xc00299c4d0) (0xc000adf540) Stream removed, broadcasting: 1 I0125 21:57:12.689280 8 log.go:172] (0xc00299c4d0) (0xc000adf900) Stream removed, broadcasting: 3 I0125 21:57:12.689295 8 log.go:172] (0xc00299c4d0) (0xc002a02aa0) Stream removed, broadcasting: 5 Jan 25 21:57:12.689: INFO: Exec stderr: "" Jan 25 21:57:12.689: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6963 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 21:57:12.689: INFO: >>> kubeConfig: /root/.kube/config I0125 21:57:12.724407 8 log.go:172] (0xc00299ca50) (0xc000adfcc0) Create stream I0125 21:57:12.724582 8 log.go:172] (0xc00299ca50) (0xc000adfcc0) Stream added, broadcasting: 1 I0125 21:57:12.727645 8 log.go:172] (0xc00299ca50) Reply frame received for 1 I0125 21:57:12.727707 8 log.go:172] (0xc00299ca50) (0xc002a02b40) Create stream I0125 21:57:12.727719 8 log.go:172] (0xc00299ca50) (0xc002a02b40) Stream added, broadcasting: 3 I0125 21:57:12.728710 8 log.go:172] (0xc00299ca50) Reply frame received for 3 I0125 21:57:12.728734 8 log.go:172] (0xc00299ca50) (0xc000adfea0) Create stream I0125 21:57:12.728746 8 log.go:172] (0xc00299ca50) (0xc000adfea0) Stream added, broadcasting: 5 I0125 21:57:12.729741 8 log.go:172] (0xc00299ca50) Reply frame received for 5 I0125 21:57:12.781764 8 log.go:172] (0xc00299ca50) Data frame received for 3 I0125 21:57:12.781867 8 log.go:172] (0xc002a02b40) (3) Data frame handling I0125 21:57:12.781886 8 log.go:172] (0xc002a02b40) (3) Data frame sent I0125 21:57:12.845461 8 log.go:172] (0xc00299ca50) Data frame received for 1 I0125 21:57:12.845589 8 log.go:172] (0xc00299ca50) (0xc000adfea0) Stream removed, broadcasting: 5 I0125 21:57:12.845652 8 log.go:172] (0xc000adfcc0) (1) Data frame handling I0125 21:57:12.845666 8 log.go:172] (0xc000adfcc0) (1) Data frame sent I0125 21:57:12.845721 8 log.go:172] (0xc00299ca50) (0xc000adfcc0) Stream removed, broadcasting: 1 I0125 21:57:12.846093 8 log.go:172] (0xc00299ca50) (0xc002a02b40) Stream removed, broadcasting: 3 I0125 21:57:12.846112 8 log.go:172] (0xc00299ca50) Go away received I0125 21:57:12.846699 8 log.go:172] (0xc00299ca50) (0xc000adfcc0) Stream removed, broadcasting: 1 I0125 21:57:12.846719 8 log.go:172] (0xc00299ca50) (0xc002a02b40) Stream removed, broadcasting: 3 I0125 21:57:12.846816 8 log.go:172] (0xc00299ca50) (0xc000adfea0) Stream removed, broadcasting: 5 Jan 25 21:57:12.846: INFO: Exec stderr: "" [AfterEach] 
[k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:57:12.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-6963" for this suite. • [SLOW TEST:26.885 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1834,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:57:12.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-6f7b9eeb-bfc0-4ff1-beb0-fc8a0e4938b3 STEP: Creating a pod to test consume secrets Jan 25 21:57:12.949: INFO: Waiting up to 5m0s for pod "pod-secrets-67f46788-51de-4e79-9ac6-32ce661192c6" in namespace "secrets-1743" to be "success or failure" Jan 25 21:57:12.957: INFO: Pod "pod-secrets-67f46788-51de-4e79-9ac6-32ce661192c6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.704808ms Jan 25 21:57:14.961: INFO: Pod "pod-secrets-67f46788-51de-4e79-9ac6-32ce661192c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01268796s Jan 25 21:57:16.972: INFO: Pod "pod-secrets-67f46788-51de-4e79-9ac6-32ce661192c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023003502s Jan 25 21:57:18.977: INFO: Pod "pod-secrets-67f46788-51de-4e79-9ac6-32ce661192c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028463658s Jan 25 21:57:21.246: INFO: Pod "pod-secrets-67f46788-51de-4e79-9ac6-32ce661192c6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.297200411s STEP: Saw pod success Jan 25 21:57:21.246: INFO: Pod "pod-secrets-67f46788-51de-4e79-9ac6-32ce661192c6" satisfied condition "success or failure" Jan 25 21:57:21.251: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-secrets-67f46788-51de-4e79-9ac6-32ce661192c6 container secret-volume-test: STEP: delete the pod Jan 25 21:57:21.518: INFO: Waiting for pod pod-secrets-67f46788-51de-4e79-9ac6-32ce661192c6 to disappear Jan 25 21:57:21.533: INFO: Pod pod-secrets-67f46788-51de-4e79-9ac6-32ce661192c6 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:57:21.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1743" for this suite. • [SLOW TEST:8.691 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1857,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:57:21.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-kbqg STEP: Creating a pod to test atomic-volume-subpath Jan 25 21:57:21.696: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-kbqg" in namespace "subpath-959" to be "success or failure" Jan 25 21:57:21.714: INFO: Pod "pod-subpath-test-configmap-kbqg": Phase="Pending", Reason="", readiness=false. Elapsed: 17.895792ms Jan 25 21:57:23.725: INFO: Pod "pod-subpath-test-configmap-kbqg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0280079s Jan 25 21:57:25.733: INFO: Pod "pod-subpath-test-configmap-kbqg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036534007s Jan 25 21:57:27.742: INFO: Pod "pod-subpath-test-configmap-kbqg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045002458s Jan 25 21:57:29.749: INFO: Pod "pod-subpath-test-configmap-kbqg": Phase="Running", Reason="", readiness=true. Elapsed: 8.052883345s Jan 25 21:57:31.755: INFO: Pod "pod-subpath-test-configmap-kbqg": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.058738021s Jan 25 21:57:33.762: INFO: Pod "pod-subpath-test-configmap-kbqg": Phase="Running", Reason="", readiness=true. Elapsed: 12.065600922s Jan 25 21:57:35.767: INFO: Pod "pod-subpath-test-configmap-kbqg": Phase="Running", Reason="", readiness=true. Elapsed: 14.070824206s Jan 25 21:57:37.775: INFO: Pod "pod-subpath-test-configmap-kbqg": Phase="Running", Reason="", readiness=true. Elapsed: 16.078160857s Jan 25 21:57:39.782: INFO: Pod "pod-subpath-test-configmap-kbqg": Phase="Running", Reason="", readiness=true. Elapsed: 18.085124437s Jan 25 21:57:41.790: INFO: Pod "pod-subpath-test-configmap-kbqg": Phase="Running", Reason="", readiness=true. Elapsed: 20.093026894s Jan 25 21:57:43.799: INFO: Pod "pod-subpath-test-configmap-kbqg": Phase="Running", Reason="", readiness=true. Elapsed: 22.102467757s Jan 25 21:57:45.810: INFO: Pod "pod-subpath-test-configmap-kbqg": Phase="Running", Reason="", readiness=true. Elapsed: 24.11332307s Jan 25 21:57:48.655: INFO: Pod "pod-subpath-test-configmap-kbqg": Phase="Running", Reason="", readiness=true. Elapsed: 26.958622416s Jan 25 21:57:50.660: INFO: Pod "pod-subpath-test-configmap-kbqg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.963449724s STEP: Saw pod success Jan 25 21:57:50.660: INFO: Pod "pod-subpath-test-configmap-kbqg" satisfied condition "success or failure" Jan 25 21:57:50.665: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-subpath-test-configmap-kbqg container test-container-subpath-configmap-kbqg: STEP: delete the pod Jan 25 21:57:51.921: INFO: Waiting for pod pod-subpath-test-configmap-kbqg to disappear Jan 25 21:57:51.933: INFO: Pod pod-subpath-test-configmap-kbqg no longer exists STEP: Deleting pod pod-subpath-test-configmap-kbqg Jan 25 21:57:51.934: INFO: Deleting pod "pod-subpath-test-configmap-kbqg" in namespace "subpath-959" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:57:51.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-959" for this suite. 
• [SLOW TEST:30.593 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":109,"skipped":1907,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:57:52.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Jan 25 21:57:52.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jan 25 21:57:52.425: INFO: stderr: "" Jan 25 21:57:52.425: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:57:52.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-892" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":110,"skipped":1922,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:57:52.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:58:03.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7313" for this suite. • [SLOW TEST:11.376 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":111,"skipped":1940,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:58:03.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 25 21:58:03.971: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-91805603-74be-4780-abae-de5f27c5585b" in namespace "security-context-test-1951" to be "success or failure" Jan 25 21:58:03.982: INFO: Pod "busybox-privileged-false-91805603-74be-4780-abae-de5f27c5585b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.406074ms Jan 25 21:58:05.993: INFO: Pod "busybox-privileged-false-91805603-74be-4780-abae-de5f27c5585b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021846722s Jan 25 21:58:07.999: INFO: Pod "busybox-privileged-false-91805603-74be-4780-abae-de5f27c5585b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028320903s Jan 25 21:58:10.006: INFO: Pod "busybox-privileged-false-91805603-74be-4780-abae-de5f27c5585b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034973122s Jan 25 21:58:12.013: INFO: Pod "busybox-privileged-false-91805603-74be-4780-abae-de5f27c5585b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041748819s Jan 25 21:58:12.013: INFO: Pod "busybox-privileged-false-91805603-74be-4780-abae-de5f27c5585b" satisfied condition "success or failure" Jan 25 21:58:12.041: INFO: Got logs for pod "busybox-privileged-false-91805603-74be-4780-abae-de5f27c5585b": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 25 21:58:12.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1951" for this suite. • [SLOW TEST:8.236 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1952,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 25 21:58:12.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 25 21:58:12.198: INFO: (0) /api/v1/nodes/jerma-node/proxy/logs/:
alternatives.log apt/ ... (200; 25.920769ms)
Jan 25 21:58:12.203: INFO: (1) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 5.723133ms)
Jan 25 21:58:12.209: INFO: (2) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 5.096832ms)
Jan 25 21:58:12.215: INFO: (3) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 5.680957ms)
Jan 25 21:58:12.219: INFO: (4) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 4.682178ms)
Jan 25 21:58:12.244: INFO: (5) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 24.741754ms)
Jan 25 21:58:12.269: INFO: (6) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 24.309272ms)
Jan 25 21:58:12.274: INFO: (7) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 5.043026ms)
Jan 25 21:58:12.278: INFO: (8) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 4.09081ms)
Jan 25 21:58:12.283: INFO: (9) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 4.386054ms)
Jan 25 21:58:12.287: INFO: (10) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 4.686436ms)
Jan 25 21:58:12.291: INFO: (11) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 3.85615ms)
Jan 25 21:58:12.295: INFO: (12) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 4.116487ms)
Jan 25 21:58:12.299: INFO: (13) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 3.333093ms)
Jan 25 21:58:12.302: INFO: (14) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 3.248868ms)
Jan 25 21:58:12.305: INFO: (15) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 3.015416ms)
Jan 25 21:58:12.310: INFO: (16) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 4.628147ms)
Jan 25 21:58:12.313: INFO: (17) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 3.262625ms)
Jan 25 21:58:12.317: INFO: (18) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 3.447178ms)
Jan 25 21:58:12.320: INFO: (19) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 3.870785ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 21:58:12.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6594" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":278,"completed":113,"skipped":1977,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 21:58:12.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 21:58:12.453: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 25 21:58:12.467: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 25 21:58:17.474: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 25 21:58:21.509: INFO: Creating deployment "test-rolling-update-deployment"
Jan 25 21:58:21.520: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 25 21:58:22.219: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 25 21:58:24.240: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan 25 21:58:24.262: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586302, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586302, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586302, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586302, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 21:58:26.272: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586302, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586302, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586302, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586302, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 21:58:28.271: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586302, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586302, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586302, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586302, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 21:58:30.270: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 25 21:58:30.290: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-8220 /apis/apps/v1/namespaces/deployment-8220/deployments/test-rolling-update-deployment c76c9dcc-8998-456a-8d76-fd81c7a97b6a 4334164 1 2020-01-25 21:58:21 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00541c6e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-25 21:58:22 +0000 UTC,LastTransitionTime:2020-01-25 21:58:22 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-01-25 21:58:29 +0000 UTC,LastTransitionTime:2020-01-25 21:58:22 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jan 25 21:58:30.295: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-8220 /apis/apps/v1/namespaces/deployment-8220/replicasets/test-rolling-update-deployment-67cf4f6444 23b01745-1c38-4ebc-9812-e165ac498801 4334154 1 2020-01-25 21:58:22 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment c76c9dcc-8998-456a-8d76-fd81c7a97b6a 0xc0053ba9f7 0xc0053ba9f8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0053baa68  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 25 21:58:30.295: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 25 21:58:30.295: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-8220 /apis/apps/v1/namespaces/deployment-8220/replicasets/test-rolling-update-controller 8fe7006f-e6be-4b64-a681-005526426b1c 4334163 2 2020-01-25 21:58:12 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment c76c9dcc-8998-456a-8d76-fd81c7a97b6a 0xc0053ba927 0xc0053ba928}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0053ba988  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 25 21:58:30.302: INFO: Pod "test-rolling-update-deployment-67cf4f6444-fkdbv" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-fkdbv test-rolling-update-deployment-67cf4f6444- deployment-8220 /api/v1/namespaces/deployment-8220/pods/test-rolling-update-deployment-67cf4f6444-fkdbv ffeefc19-33ea-4176-a9e6-269e7ad350a6 4334153 0 2020-01-25 21:58:22 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 23b01745-1c38-4ebc-9812-e165ac498801 0xc0053baeb7 0xc0053baeb8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gg6fs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gg6fs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gg6fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:58:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:58:29 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:58:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 21:58:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-25 21:58:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 21:58:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://e14d8f00f00865773db1575d0b295b69f18ef305a51f9d064626e45e676f5dc8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 21:58:30.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8220" for this suite.

• [SLOW TEST:17.992 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":114,"skipped":1977,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 21:58:30.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 25 21:58:30.467: INFO: Waiting up to 5m0s for pod "pod-1a0342ae-7a5a-4174-9161-04b092500e25" in namespace "emptydir-1334" to be "success or failure"
Jan 25 21:58:30.506: INFO: Pod "pod-1a0342ae-7a5a-4174-9161-04b092500e25": Phase="Pending", Reason="", readiness=false. Elapsed: 39.010355ms
Jan 25 21:58:32.517: INFO: Pod "pod-1a0342ae-7a5a-4174-9161-04b092500e25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049773219s
Jan 25 21:58:34.529: INFO: Pod "pod-1a0342ae-7a5a-4174-9161-04b092500e25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061259696s
Jan 25 21:58:36.538: INFO: Pod "pod-1a0342ae-7a5a-4174-9161-04b092500e25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070414265s
Jan 25 21:58:38.558: INFO: Pod "pod-1a0342ae-7a5a-4174-9161-04b092500e25": Phase="Pending", Reason="", readiness=false. Elapsed: 8.090375916s
Jan 25 21:58:40.602: INFO: Pod "pod-1a0342ae-7a5a-4174-9161-04b092500e25": Phase="Pending", Reason="", readiness=false. Elapsed: 10.135100278s
Jan 25 21:58:42.620: INFO: Pod "pod-1a0342ae-7a5a-4174-9161-04b092500e25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.152760428s
STEP: Saw pod success
Jan 25 21:58:42.621: INFO: Pod "pod-1a0342ae-7a5a-4174-9161-04b092500e25" satisfied condition "success or failure"
Jan 25 21:58:42.636: INFO: Trying to get logs from node jerma-node pod pod-1a0342ae-7a5a-4174-9161-04b092500e25 container test-container: 
STEP: delete the pod
Jan 25 21:58:42.703: INFO: Waiting for pod pod-1a0342ae-7a5a-4174-9161-04b092500e25 to disappear
Jan 25 21:58:42.765: INFO: Pod pod-1a0342ae-7a5a-4174-9161-04b092500e25 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 21:58:42.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1334" for this suite.

• [SLOW TEST:12.487 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1981,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 21:58:42.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-7dc1d54d-82ec-4a38-a856-6de04912f9d8 in namespace container-probe-8005
Jan 25 21:58:51.027: INFO: Started pod liveness-7dc1d54d-82ec-4a38-a856-6de04912f9d8 in namespace container-probe-8005
STEP: checking the pod's current state and verifying that restartCount is present
Jan 25 21:58:51.112: INFO: Initial restart count of pod liveness-7dc1d54d-82ec-4a38-a856-6de04912f9d8 is 0
Jan 25 21:59:07.197: INFO: Restart count of pod container-probe-8005/liveness-7dc1d54d-82ec-4a38-a856-6de04912f9d8 is now 1 (16.085157265s elapsed)
Jan 25 21:59:29.331: INFO: Restart count of pod container-probe-8005/liveness-7dc1d54d-82ec-4a38-a856-6de04912f9d8 is now 2 (38.218591778s elapsed)
Jan 25 21:59:49.446: INFO: Restart count of pod container-probe-8005/liveness-7dc1d54d-82ec-4a38-a856-6de04912f9d8 is now 3 (58.334145533s elapsed)
Jan 25 22:00:07.518: INFO: Restart count of pod container-probe-8005/liveness-7dc1d54d-82ec-4a38-a856-6de04912f9d8 is now 4 (1m16.405651342s elapsed)
Jan 25 22:01:09.939: INFO: Restart count of pod container-probe-8005/liveness-7dc1d54d-82ec-4a38-a856-6de04912f9d8 is now 5 (2m18.827004308s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:01:09.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8005" for this suite.

• [SLOW TEST:147.213 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1981,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:01:10.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-80da0e02-8740-4ff2-a521-69fc04513e52
STEP: Creating a pod to test consume secrets
Jan 25 22:01:10.147: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7521b1e0-1006-459e-a212-7e39f2815a9d" in namespace "projected-5816" to be "success or failure"
Jan 25 22:01:10.181: INFO: Pod "pod-projected-secrets-7521b1e0-1006-459e-a212-7e39f2815a9d": Phase="Pending", Reason="", readiness=false. Elapsed: 33.985769ms
Jan 25 22:01:12.191: INFO: Pod "pod-projected-secrets-7521b1e0-1006-459e-a212-7e39f2815a9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04440427s
Jan 25 22:01:14.197: INFO: Pod "pod-projected-secrets-7521b1e0-1006-459e-a212-7e39f2815a9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049925309s
Jan 25 22:01:16.204: INFO: Pod "pod-projected-secrets-7521b1e0-1006-459e-a212-7e39f2815a9d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056918674s
Jan 25 22:01:18.214: INFO: Pod "pod-projected-secrets-7521b1e0-1006-459e-a212-7e39f2815a9d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067014259s
Jan 25 22:01:20.222: INFO: Pod "pod-projected-secrets-7521b1e0-1006-459e-a212-7e39f2815a9d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.075418573s
Jan 25 22:01:22.230: INFO: Pod "pod-projected-secrets-7521b1e0-1006-459e-a212-7e39f2815a9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.083114605s
STEP: Saw pod success
Jan 25 22:01:22.230: INFO: Pod "pod-projected-secrets-7521b1e0-1006-459e-a212-7e39f2815a9d" satisfied condition "success or failure"
Jan 25 22:01:22.233: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-7521b1e0-1006-459e-a212-7e39f2815a9d container projected-secret-volume-test: 
STEP: delete the pod
Jan 25 22:01:22.582: INFO: Waiting for pod pod-projected-secrets-7521b1e0-1006-459e-a212-7e39f2815a9d to disappear
Jan 25 22:01:22.587: INFO: Pod pod-projected-secrets-7521b1e0-1006-459e-a212-7e39f2815a9d no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:01:22.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5816" for this suite.

• [SLOW TEST:12.578 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1989,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:01:22.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 22:01:23.296: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 22:01:25.311: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586483, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586483, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586483, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586483, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:01:27.321: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586483, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586483, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586483, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586483, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:01:29.341: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586483, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586483, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586483, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586483, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:01:31.318: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586483, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586483, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586483, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586483, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 22:01:34.456: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:01:34.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1618" for this suite.
STEP: Destroying namespace "webhook-1618-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.270 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":118,"skipped":1997,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:01:34.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:01:51.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9332" for this suite.

• [SLOW TEST:16.520 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":119,"skipped":2008,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:01:51.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jan 25 22:01:51.662: INFO: Waiting up to 5m0s for pod "downward-api-d3c3ac76-4581-4ec0-9851-5c5cbcc3c4fb" in namespace "downward-api-8529" to be "success or failure"
Jan 25 22:01:51.727: INFO: Pod "downward-api-d3c3ac76-4581-4ec0-9851-5c5cbcc3c4fb": Phase="Pending", Reason="", readiness=false. Elapsed: 64.73725ms
Jan 25 22:01:53.735: INFO: Pod "downward-api-d3c3ac76-4581-4ec0-9851-5c5cbcc3c4fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072489544s
Jan 25 22:01:55.742: INFO: Pod "downward-api-d3c3ac76-4581-4ec0-9851-5c5cbcc3c4fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079489003s
Jan 25 22:01:57.757: INFO: Pod "downward-api-d3c3ac76-4581-4ec0-9851-5c5cbcc3c4fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094904824s
Jan 25 22:01:59.766: INFO: Pod "downward-api-d3c3ac76-4581-4ec0-9851-5c5cbcc3c4fb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103337188s
Jan 25 22:02:01.779: INFO: Pod "downward-api-d3c3ac76-4581-4ec0-9851-5c5cbcc3c4fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.116395089s
STEP: Saw pod success
Jan 25 22:02:01.779: INFO: Pod "downward-api-d3c3ac76-4581-4ec0-9851-5c5cbcc3c4fb" satisfied condition "success or failure"
Jan 25 22:02:01.786: INFO: Trying to get logs from node jerma-node pod downward-api-d3c3ac76-4581-4ec0-9851-5c5cbcc3c4fb container dapi-container: 
STEP: delete the pod
Jan 25 22:02:01.844: INFO: Waiting for pod downward-api-d3c3ac76-4581-4ec0-9851-5c5cbcc3c4fb to disappear
Jan 25 22:02:01.853: INFO: Pod downward-api-d3c3ac76-4581-4ec0-9851-5c5cbcc3c4fb no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:02:01.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8529" for this suite.

• [SLOW TEST:10.543 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":2012,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:02:01.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 25 22:02:02.204: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e45e691-501a-4891-a2f2-eacd80f0393d" in namespace "projected-1735" to be "success or failure"
Jan 25 22:02:02.215: INFO: Pod "downwardapi-volume-9e45e691-501a-4891-a2f2-eacd80f0393d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.098171ms
Jan 25 22:02:05.143: INFO: Pod "downwardapi-volume-9e45e691-501a-4891-a2f2-eacd80f0393d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.939189695s
Jan 25 22:02:07.164: INFO: Pod "downwardapi-volume-9e45e691-501a-4891-a2f2-eacd80f0393d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.960484956s
Jan 25 22:02:09.176: INFO: Pod "downwardapi-volume-9e45e691-501a-4891-a2f2-eacd80f0393d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.972506699s
Jan 25 22:02:11.183: INFO: Pod "downwardapi-volume-9e45e691-501a-4891-a2f2-eacd80f0393d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.979130878s
STEP: Saw pod success
Jan 25 22:02:11.183: INFO: Pod "downwardapi-volume-9e45e691-501a-4891-a2f2-eacd80f0393d" satisfied condition "success or failure"
Jan 25 22:02:11.187: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-9e45e691-501a-4891-a2f2-eacd80f0393d container client-container: 
STEP: delete the pod
Jan 25 22:02:11.257: INFO: Waiting for pod downwardapi-volume-9e45e691-501a-4891-a2f2-eacd80f0393d to disappear
Jan 25 22:02:11.262: INFO: Pod downwardapi-volume-9e45e691-501a-4891-a2f2-eacd80f0393d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:02:11.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1735" for this suite.

• [SLOW TEST:9.335 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":2033,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:02:11.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-c3e7dc61-afa0-4971-a8d5-15340905e1cb
STEP: Creating a pod to test consume configMaps
Jan 25 22:02:11.641: INFO: Waiting up to 5m0s for pod "pod-configmaps-0739dcbf-596c-4bb5-92c5-5043c208a28d" in namespace "configmap-3370" to be "success or failure"
Jan 25 22:02:11.653: INFO: Pod "pod-configmaps-0739dcbf-596c-4bb5-92c5-5043c208a28d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.362534ms
Jan 25 22:02:13.662: INFO: Pod "pod-configmaps-0739dcbf-596c-4bb5-92c5-5043c208a28d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02135805s
Jan 25 22:02:15.674: INFO: Pod "pod-configmaps-0739dcbf-596c-4bb5-92c5-5043c208a28d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033069346s
Jan 25 22:02:17.680: INFO: Pod "pod-configmaps-0739dcbf-596c-4bb5-92c5-5043c208a28d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038866571s
Jan 25 22:02:19.687: INFO: Pod "pod-configmaps-0739dcbf-596c-4bb5-92c5-5043c208a28d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04621331s
Jan 25 22:02:21.704: INFO: Pod "pod-configmaps-0739dcbf-596c-4bb5-92c5-5043c208a28d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063451471s
STEP: Saw pod success
Jan 25 22:02:21.704: INFO: Pod "pod-configmaps-0739dcbf-596c-4bb5-92c5-5043c208a28d" satisfied condition "success or failure"
Jan 25 22:02:21.712: INFO: Trying to get logs from node jerma-node pod pod-configmaps-0739dcbf-596c-4bb5-92c5-5043c208a28d container configmap-volume-test: 
STEP: delete the pod
Jan 25 22:02:21.867: INFO: Waiting for pod pod-configmaps-0739dcbf-596c-4bb5-92c5-5043c208a28d to disappear
Jan 25 22:02:21.876: INFO: Pod pod-configmaps-0739dcbf-596c-4bb5-92c5-5043c208a28d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:02:21.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3370" for this suite.

• [SLOW TEST:10.624 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":2039,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:02:21.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-w66d
STEP: Creating a pod to test atomic-volume-subpath
Jan 25 22:02:22.191: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-w66d" in namespace "subpath-2290" to be "success or failure"
Jan 25 22:02:22.201: INFO: Pod "pod-subpath-test-configmap-w66d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.866938ms
Jan 25 22:02:24.206: INFO: Pod "pod-subpath-test-configmap-w66d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015232805s
Jan 25 22:02:26.212: INFO: Pod "pod-subpath-test-configmap-w66d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020611866s
Jan 25 22:02:28.218: INFO: Pod "pod-subpath-test-configmap-w66d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027492917s
Jan 25 22:02:30.227: INFO: Pod "pod-subpath-test-configmap-w66d": Phase="Running", Reason="", readiness=true. Elapsed: 8.036084759s
Jan 25 22:02:32.239: INFO: Pod "pod-subpath-test-configmap-w66d": Phase="Running", Reason="", readiness=true. Elapsed: 10.04838579s
Jan 25 22:02:34.247: INFO: Pod "pod-subpath-test-configmap-w66d": Phase="Running", Reason="", readiness=true. Elapsed: 12.055699744s
Jan 25 22:02:36.261: INFO: Pod "pod-subpath-test-configmap-w66d": Phase="Running", Reason="", readiness=true. Elapsed: 14.070377079s
Jan 25 22:02:38.268: INFO: Pod "pod-subpath-test-configmap-w66d": Phase="Running", Reason="", readiness=true. Elapsed: 16.077182179s
Jan 25 22:02:40.276: INFO: Pod "pod-subpath-test-configmap-w66d": Phase="Running", Reason="", readiness=true. Elapsed: 18.084957034s
Jan 25 22:02:42.288: INFO: Pod "pod-subpath-test-configmap-w66d": Phase="Running", Reason="", readiness=true. Elapsed: 20.096631478s
Jan 25 22:02:44.293: INFO: Pod "pod-subpath-test-configmap-w66d": Phase="Running", Reason="", readiness=true. Elapsed: 22.101814261s
Jan 25 22:02:46.300: INFO: Pod "pod-subpath-test-configmap-w66d": Phase="Running", Reason="", readiness=true. Elapsed: 24.108685142s
Jan 25 22:02:48.306: INFO: Pod "pod-subpath-test-configmap-w66d": Phase="Running", Reason="", readiness=true. Elapsed: 26.114579383s
Jan 25 22:02:50.313: INFO: Pod "pod-subpath-test-configmap-w66d": Phase="Running", Reason="", readiness=true. Elapsed: 28.12169041s
Jan 25 22:02:52.321: INFO: Pod "pod-subpath-test-configmap-w66d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.129849056s
STEP: Saw pod success
Jan 25 22:02:52.321: INFO: Pod "pod-subpath-test-configmap-w66d" satisfied condition "success or failure"
Jan 25 22:02:52.325: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-w66d container test-container-subpath-configmap-w66d: 
STEP: delete the pod
Jan 25 22:02:52.414: INFO: Waiting for pod pod-subpath-test-configmap-w66d to disappear
Jan 25 22:02:52.540: INFO: Pod pod-subpath-test-configmap-w66d no longer exists
STEP: Deleting pod pod-subpath-test-configmap-w66d
Jan 25 22:02:52.540: INFO: Deleting pod "pod-subpath-test-configmap-w66d" in namespace "subpath-2290"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:02:52.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2290" for this suite.

• [SLOW TEST:30.666 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":123,"skipped":2072,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:02:52.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:03:03.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1005" for this suite.

• [SLOW TEST:11.372 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":124,"skipped":2073,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:03:03.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 22:03:04.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:03:12.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-314" for this suite.

• [SLOW TEST:8.195 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":2073,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:03:12.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 25 22:03:21.441: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:03:21.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2954" for this suite.

• [SLOW TEST:9.372 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2079,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:03:21.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-5a2b6b78-ad87-4584-86eb-eefe00ce6546 in namespace container-probe-291
Jan 25 22:03:31.711: INFO: Started pod liveness-5a2b6b78-ad87-4584-86eb-eefe00ce6546 in namespace container-probe-291
STEP: checking the pod's current state and verifying that restartCount is present
Jan 25 22:03:31.715: INFO: Initial restart count of pod liveness-5a2b6b78-ad87-4584-86eb-eefe00ce6546 is 0
Jan 25 22:03:56.431: INFO: Restart count of pod container-probe-291/liveness-5a2b6b78-ad87-4584-86eb-eefe00ce6546 is now 1 (24.71611345s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:03:56.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-291" for this suite.

• [SLOW TEST:35.140 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2079,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:03:56.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:04:07.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9427" for this suite.

• [SLOW TEST:10.433 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":128,"skipped":2116,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:04:07.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-883d5804-b9df-4860-bffa-9699ae333b8a
STEP: Creating a pod to test consume configMaps
Jan 25 22:04:07.214: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-46c8fe3d-6a63-4d9b-80c0-aa8fdc693b72" in namespace "projected-7396" to be "success or failure"
Jan 25 22:04:07.232: INFO: Pod "pod-projected-configmaps-46c8fe3d-6a63-4d9b-80c0-aa8fdc693b72": Phase="Pending", Reason="", readiness=false. Elapsed: 17.331295ms
Jan 25 22:04:09.243: INFO: Pod "pod-projected-configmaps-46c8fe3d-6a63-4d9b-80c0-aa8fdc693b72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028789339s
Jan 25 22:04:11.252: INFO: Pod "pod-projected-configmaps-46c8fe3d-6a63-4d9b-80c0-aa8fdc693b72": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038001329s
Jan 25 22:04:13.261: INFO: Pod "pod-projected-configmaps-46c8fe3d-6a63-4d9b-80c0-aa8fdc693b72": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046875453s
Jan 25 22:04:15.274: INFO: Pod "pod-projected-configmaps-46c8fe3d-6a63-4d9b-80c0-aa8fdc693b72": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059348038s
Jan 25 22:04:17.336: INFO: Pod "pod-projected-configmaps-46c8fe3d-6a63-4d9b-80c0-aa8fdc693b72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.121698414s
STEP: Saw pod success
Jan 25 22:04:17.337: INFO: Pod "pod-projected-configmaps-46c8fe3d-6a63-4d9b-80c0-aa8fdc693b72" satisfied condition "success or failure"
Jan 25 22:04:17.345: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-46c8fe3d-6a63-4d9b-80c0-aa8fdc693b72 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 25 22:04:17.470: INFO: Waiting for pod pod-projected-configmaps-46c8fe3d-6a63-4d9b-80c0-aa8fdc693b72 to disappear
Jan 25 22:04:17.518: INFO: Pod pod-projected-configmaps-46c8fe3d-6a63-4d9b-80c0-aa8fdc693b72 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:04:17.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7396" for this suite.

• [SLOW TEST:10.456 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2139,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:04:17.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jan 25 22:04:17.656: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:04:32.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7481" for this suite.

• [SLOW TEST:14.799 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":130,"skipped":2147,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:04:32.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jan 25 22:04:44.573: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2863 PodName:pod-sharedvolume-a5446898-524a-4c7f-ad4c-4e574ec42d48 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 22:04:44.573: INFO: >>> kubeConfig: /root/.kube/config
I0125 22:04:44.612696       8 log.go:172] (0xc002b04420) (0xc000a1fe00) Create stream
I0125 22:04:44.612768       8 log.go:172] (0xc002b04420) (0xc000a1fe00) Stream added, broadcasting: 1
I0125 22:04:44.616197       8 log.go:172] (0xc002b04420) Reply frame received for 1
I0125 22:04:44.616230       8 log.go:172] (0xc002b04420) (0xc001f7d400) Create stream
I0125 22:04:44.616239       8 log.go:172] (0xc002b04420) (0xc001f7d400) Stream added, broadcasting: 3
I0125 22:04:44.618836       8 log.go:172] (0xc002b04420) Reply frame received for 3
I0125 22:04:44.618864       8 log.go:172] (0xc002b04420) (0xc0010c0500) Create stream
I0125 22:04:44.618874       8 log.go:172] (0xc002b04420) (0xc0010c0500) Stream added, broadcasting: 5
I0125 22:04:44.620504       8 log.go:172] (0xc002b04420) Reply frame received for 5
I0125 22:04:44.729187       8 log.go:172] (0xc002b04420) Data frame received for 3
I0125 22:04:44.729266       8 log.go:172] (0xc001f7d400) (3) Data frame handling
I0125 22:04:44.729299       8 log.go:172] (0xc001f7d400) (3) Data frame sent
I0125 22:04:44.801855       8 log.go:172] (0xc002b04420) (0xc001f7d400) Stream removed, broadcasting: 3
I0125 22:04:44.802169       8 log.go:172] (0xc002b04420) Data frame received for 1
I0125 22:04:44.802196       8 log.go:172] (0xc000a1fe00) (1) Data frame handling
I0125 22:04:44.802224       8 log.go:172] (0xc000a1fe00) (1) Data frame sent
I0125 22:04:44.802246       8 log.go:172] (0xc002b04420) (0xc000a1fe00) Stream removed, broadcasting: 1
I0125 22:04:44.803064       8 log.go:172] (0xc002b04420) (0xc0010c0500) Stream removed, broadcasting: 5
I0125 22:04:44.803277       8 log.go:172] (0xc002b04420) Go away received
I0125 22:04:44.803407       8 log.go:172] (0xc002b04420) (0xc000a1fe00) Stream removed, broadcasting: 1
I0125 22:04:44.803437       8 log.go:172] (0xc002b04420) (0xc001f7d400) Stream removed, broadcasting: 3
I0125 22:04:44.803450       8 log.go:172] (0xc002b04420) (0xc0010c0500) Stream removed, broadcasting: 5
Jan 25 22:04:44.803: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:04:44.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2863" for this suite.

• [SLOW TEST:12.472 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":131,"skipped":2192,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:04:44.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a projection with a secret named secret-emptykey-test-17af4e85-fff1-4738-9f37-1612746057c0
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:04:44.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6529" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":132,"skipped":2221,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:04:44.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-7350
STEP: Creating an active service to test reachability when its FQDN is used as the externalName of another service
STEP: creating service externalsvc in namespace services-7350
STEP: creating replication controller externalsvc in namespace services-7350
I0125 22:04:45.436050       8 runners.go:189] Created replication controller with name: externalsvc, namespace: services-7350, replica count: 2
I0125 22:04:48.487948       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 22:04:51.488613       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 22:04:54.489558       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 22:04:57.490163       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 22:05:00.491184       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Jan 25 22:05:00.577: INFO: Creating new exec pod
Jan 25 22:05:08.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7350 execpodzjfcj -- /bin/sh -x -c nslookup nodeport-service'
Jan 25 22:05:10.984: INFO: stderr: "I0125 22:05:10.790811    1132 log.go:172] (0xc0008aeb00) (0xc000632780) Create stream\nI0125 22:05:10.790909    1132 log.go:172] (0xc0008aeb00) (0xc000632780) Stream added, broadcasting: 1\nI0125 22:05:10.797589    1132 log.go:172] (0xc0008aeb00) Reply frame received for 1\nI0125 22:05:10.797748    1132 log.go:172] (0xc0008aeb00) (0xc000209540) Create stream\nI0125 22:05:10.797768    1132 log.go:172] (0xc0008aeb00) (0xc000209540) Stream added, broadcasting: 3\nI0125 22:05:10.803749    1132 log.go:172] (0xc0008aeb00) Reply frame received for 3\nI0125 22:05:10.803780    1132 log.go:172] (0xc0008aeb00) (0xc00089c0a0) Create stream\nI0125 22:05:10.803797    1132 log.go:172] (0xc0008aeb00) (0xc00089c0a0) Stream added, broadcasting: 5\nI0125 22:05:10.805982    1132 log.go:172] (0xc0008aeb00) Reply frame received for 5\nI0125 22:05:10.878286    1132 log.go:172] (0xc0008aeb00) Data frame received for 5\nI0125 22:05:10.878797    1132 log.go:172] (0xc00089c0a0) (5) Data frame handling\nI0125 22:05:10.878894    1132 log.go:172] (0xc00089c0a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0125 22:05:10.894799    1132 log.go:172] (0xc0008aeb00) Data frame received for 3\nI0125 22:05:10.894837    1132 log.go:172] (0xc000209540) (3) Data frame handling\nI0125 22:05:10.894865    1132 log.go:172] (0xc000209540) (3) Data frame sent\nI0125 22:05:10.895712    1132 log.go:172] (0xc0008aeb00) Data frame received for 3\nI0125 22:05:10.895726    1132 log.go:172] (0xc000209540) (3) Data frame handling\nI0125 22:05:10.895740    1132 log.go:172] (0xc000209540) (3) Data frame sent\nI0125 22:05:10.973082    1132 log.go:172] (0xc0008aeb00) (0xc00089c0a0) Stream removed, broadcasting: 5\nI0125 22:05:10.973212    1132 log.go:172] (0xc0008aeb00) Data frame received for 1\nI0125 22:05:10.973445    1132 log.go:172] (0xc0008aeb00) (0xc000209540) Stream removed, broadcasting: 3\nI0125 22:05:10.973495    1132 log.go:172] (0xc000632780) (1) Data frame handling\nI0125 22:05:10.973539    1132 log.go:172] (0xc000632780) (1) Data frame sent\nI0125 22:05:10.973552    1132 log.go:172] (0xc0008aeb00) (0xc000632780) Stream removed, broadcasting: 1\nI0125 22:05:10.973571    1132 log.go:172] (0xc0008aeb00) Go away received\nI0125 22:05:10.974832    1132 log.go:172] (0xc0008aeb00) (0xc000632780) Stream removed, broadcasting: 1\nI0125 22:05:10.974845    1132 log.go:172] (0xc0008aeb00) (0xc000209540) Stream removed, broadcasting: 3\nI0125 22:05:10.974854    1132 log.go:172] (0xc0008aeb00) (0xc00089c0a0) Stream removed, broadcasting: 5\n"
Jan 25 22:05:10.984: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7350.svc.cluster.local\tcanonical name = externalsvc.services-7350.svc.cluster.local.\nName:\texternalsvc.services-7350.svc.cluster.local\nAddress: 10.96.35.216\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-7350, will wait for the garbage collector to delete the pods
Jan 25 22:05:11.046: INFO: Deleting ReplicationController externalsvc took: 5.499883ms
Jan 25 22:05:11.447: INFO: Terminating ReplicationController externalsvc pods took: 401.031758ms
Jan 25 22:05:22.420: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:05:22.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7350" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:37.527 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":133,"skipped":2259,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:05:22.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 25 22:05:22.571: INFO: Waiting up to 5m0s for pod "downwardapi-volume-beebfe6b-2aee-472b-b2e6-0d6cbaa8f6b2" in namespace "projected-7826" to be "success or failure"
Jan 25 22:05:22.578: INFO: Pod "downwardapi-volume-beebfe6b-2aee-472b-b2e6-0d6cbaa8f6b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.246898ms
Jan 25 22:05:24.587: INFO: Pod "downwardapi-volume-beebfe6b-2aee-472b-b2e6-0d6cbaa8f6b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015512515s
Jan 25 22:05:26.597: INFO: Pod "downwardapi-volume-beebfe6b-2aee-472b-b2e6-0d6cbaa8f6b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024949414s
Jan 25 22:05:28.603: INFO: Pod "downwardapi-volume-beebfe6b-2aee-472b-b2e6-0d6cbaa8f6b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031083378s
Jan 25 22:05:30.611: INFO: Pod "downwardapi-volume-beebfe6b-2aee-472b-b2e6-0d6cbaa8f6b2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038629654s
Jan 25 22:05:32.620: INFO: Pod "downwardapi-volume-beebfe6b-2aee-472b-b2e6-0d6cbaa8f6b2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.047647977s
Jan 25 22:05:34.628: INFO: Pod "downwardapi-volume-beebfe6b-2aee-472b-b2e6-0d6cbaa8f6b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.055756756s
STEP: Saw pod success
Jan 25 22:05:34.628: INFO: Pod "downwardapi-volume-beebfe6b-2aee-472b-b2e6-0d6cbaa8f6b2" satisfied condition "success or failure"
Jan 25 22:05:34.632: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-beebfe6b-2aee-472b-b2e6-0d6cbaa8f6b2 container client-container: 
STEP: delete the pod
Jan 25 22:05:34.796: INFO: Waiting for pod downwardapi-volume-beebfe6b-2aee-472b-b2e6-0d6cbaa8f6b2 to disappear
Jan 25 22:05:34.809: INFO: Pod downwardapi-volume-beebfe6b-2aee-472b-b2e6-0d6cbaa8f6b2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:05:34.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7826" for this suite.

• [SLOW TEST:12.352 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2263,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:05:34.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 22:05:35.627: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 22:05:37.644: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586735, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586735, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586735, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586735, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:05:39.652: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586735, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586735, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586735, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586735, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:05:41.652: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586735, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586735, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586735, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586735, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 22:05:44.680: INFO: Waiting for the number of endpoints of service e2e-test-webhook to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 22:05:44.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-815-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:05:45.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3288" for this suite.
STEP: Destroying namespace "webhook-3288-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.230 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":135,"skipped":2265,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:05:46.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-05416744-3f9a-4fe4-bf48-7ec0eeff7b58
STEP: Creating a pod to test consume configMaps
Jan 25 22:05:46.319: INFO: Waiting up to 5m0s for pod "pod-configmaps-1ffd1775-377a-4590-8f5c-1f7c69d8caeb" in namespace "configmap-777" to be "success or failure"
Jan 25 22:05:46.348: INFO: Pod "pod-configmaps-1ffd1775-377a-4590-8f5c-1f7c69d8caeb": Phase="Pending", Reason="", readiness=false. Elapsed: 28.897014ms
Jan 25 22:05:48.464: INFO: Pod "pod-configmaps-1ffd1775-377a-4590-8f5c-1f7c69d8caeb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14546834s
Jan 25 22:05:50.474: INFO: Pod "pod-configmaps-1ffd1775-377a-4590-8f5c-1f7c69d8caeb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154791065s
Jan 25 22:05:52.484: INFO: Pod "pod-configmaps-1ffd1775-377a-4590-8f5c-1f7c69d8caeb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.164931284s
Jan 25 22:05:54.493: INFO: Pod "pod-configmaps-1ffd1775-377a-4590-8f5c-1f7c69d8caeb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.174524531s
Jan 25 22:05:56.507: INFO: Pod "pod-configmaps-1ffd1775-377a-4590-8f5c-1f7c69d8caeb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.187927798s
STEP: Saw pod success
Jan 25 22:05:56.507: INFO: Pod "pod-configmaps-1ffd1775-377a-4590-8f5c-1f7c69d8caeb" satisfied condition "success or failure"
Jan 25 22:05:56.513: INFO: Trying to get logs from node jerma-node pod pod-configmaps-1ffd1775-377a-4590-8f5c-1f7c69d8caeb container configmap-volume-test: 
STEP: delete the pod
Jan 25 22:05:56.569: INFO: Waiting for pod pod-configmaps-1ffd1775-377a-4590-8f5c-1f7c69d8caeb to disappear
Jan 25 22:05:56.574: INFO: Pod pod-configmaps-1ffd1775-377a-4590-8f5c-1f7c69d8caeb no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:05:56.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-777" for this suite.

• [SLOW TEST:10.551 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2274,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:05:56.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 25 22:05:56.894: INFO: Waiting up to 5m0s for pod "pod-57bfdf0a-25d4-4192-b68f-ce23c02e36e9" in namespace "emptydir-3180" to be "success or failure"
Jan 25 22:05:56.903: INFO: Pod "pod-57bfdf0a-25d4-4192-b68f-ce23c02e36e9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.504677ms
Jan 25 22:05:58.927: INFO: Pod "pod-57bfdf0a-25d4-4192-b68f-ce23c02e36e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032389836s
Jan 25 22:06:01.004: INFO: Pod "pod-57bfdf0a-25d4-4192-b68f-ce23c02e36e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109886368s
Jan 25 22:06:03.080: INFO: Pod "pod-57bfdf0a-25d4-4192-b68f-ce23c02e36e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.185524919s
Jan 25 22:06:05.086: INFO: Pod "pod-57bfdf0a-25d4-4192-b68f-ce23c02e36e9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.191538942s
Jan 25 22:06:07.092: INFO: Pod "pod-57bfdf0a-25d4-4192-b68f-ce23c02e36e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.197703849s
STEP: Saw pod success
Jan 25 22:06:07.092: INFO: Pod "pod-57bfdf0a-25d4-4192-b68f-ce23c02e36e9" satisfied condition "success or failure"
Jan 25 22:06:07.096: INFO: Trying to get logs from node jerma-node pod pod-57bfdf0a-25d4-4192-b68f-ce23c02e36e9 container test-container: 
STEP: delete the pod
Jan 25 22:06:07.134: INFO: Waiting for pod pod-57bfdf0a-25d4-4192-b68f-ce23c02e36e9 to disappear
Jan 25 22:06:07.198: INFO: Pod pod-57bfdf0a-25d4-4192-b68f-ce23c02e36e9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:06:07.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3180" for this suite.

• [SLOW TEST:10.622 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2279,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:06:07.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:06:57.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8580" for this suite.

• [SLOW TEST:50.342 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2280,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:06:57.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 22:06:58.601: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 22:07:00.624: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586818, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586818, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586818, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586818, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:07:02.633: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586818, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586818, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586818, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586818, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:07:04.631: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586818, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586818, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586818, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715586818, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 22:07:07.666: INFO: Waiting for the number of endpoints of service e2e-test-webhook to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:07:08.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6668" for this suite.
STEP: Destroying namespace "webhook-6668-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.845 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":139,"skipped":2287,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:07:08.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-4f3f21d4-6c12-4841-903a-e05205e50b5c in namespace container-probe-8460
Jan 25 22:07:18.741: INFO: Started pod busybox-4f3f21d4-6c12-4841-903a-e05205e50b5c in namespace container-probe-8460
STEP: checking the pod's current state and verifying that restartCount is present
Jan 25 22:07:18.748: INFO: Initial restart count of pod busybox-4f3f21d4-6c12-4841-903a-e05205e50b5c is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:11:20.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8460" for this suite.

• [SLOW TEST:252.048 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2293,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:11:20.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 22:11:20.598: INFO: Waiting up to 5m0s for pod "busybox-user-65534-a8dd3287-f132-4584-8353-2470e0544757" in namespace "security-context-test-6170" to be "success or failure"
Jan 25 22:11:20.616: INFO: Pod "busybox-user-65534-a8dd3287-f132-4584-8353-2470e0544757": Phase="Pending", Reason="", readiness=false. Elapsed: 17.700607ms
Jan 25 22:11:22.626: INFO: Pod "busybox-user-65534-a8dd3287-f132-4584-8353-2470e0544757": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027852582s
Jan 25 22:11:24.632: INFO: Pod "busybox-user-65534-a8dd3287-f132-4584-8353-2470e0544757": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034253947s
Jan 25 22:11:26.648: INFO: Pod "busybox-user-65534-a8dd3287-f132-4584-8353-2470e0544757": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04984368s
Jan 25 22:11:28.656: INFO: Pod "busybox-user-65534-a8dd3287-f132-4584-8353-2470e0544757": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057392294s
Jan 25 22:11:30.666: INFO: Pod "busybox-user-65534-a8dd3287-f132-4584-8353-2470e0544757": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067450155s
Jan 25 22:11:30.666: INFO: Pod "busybox-user-65534-a8dd3287-f132-4584-8353-2470e0544757" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:11:30.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6170" for this suite.

• [SLOW TEST:10.216 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2361,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:11:30.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-241a2dc2-7792-4bcd-84aa-87ee2a7d5eb2
STEP: Creating a pod to test consume configMaps
Jan 25 22:11:30.966: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e91cab35-c951-4a0e-a96d-83eb9bdba27c" in namespace "projected-5857" to be "success or failure"
Jan 25 22:11:30.989: INFO: Pod "pod-projected-configmaps-e91cab35-c951-4a0e-a96d-83eb9bdba27c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.095279ms
Jan 25 22:11:32.998: INFO: Pod "pod-projected-configmaps-e91cab35-c951-4a0e-a96d-83eb9bdba27c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030918083s
Jan 25 22:11:35.004: INFO: Pod "pod-projected-configmaps-e91cab35-c951-4a0e-a96d-83eb9bdba27c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037822352s
Jan 25 22:11:37.010: INFO: Pod "pod-projected-configmaps-e91cab35-c951-4a0e-a96d-83eb9bdba27c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043533614s
Jan 25 22:11:39.015: INFO: Pod "pod-projected-configmaps-e91cab35-c951-4a0e-a96d-83eb9bdba27c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047997907s
STEP: Saw pod success
Jan 25 22:11:39.015: INFO: Pod "pod-projected-configmaps-e91cab35-c951-4a0e-a96d-83eb9bdba27c" satisfied condition "success or failure"
Jan 25 22:11:39.018: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-e91cab35-c951-4a0e-a96d-83eb9bdba27c container projected-configmap-volume-test: 
STEP: delete the pod
Jan 25 22:11:39.138: INFO: Waiting for pod pod-projected-configmaps-e91cab35-c951-4a0e-a96d-83eb9bdba27c to disappear
Jan 25 22:11:39.153: INFO: Pod pod-projected-configmaps-e91cab35-c951-4a0e-a96d-83eb9bdba27c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:11:39.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5857" for this suite.

• [SLOW TEST:8.507 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2397,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:11:39.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 22:11:39.379: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 25 22:11:44.390: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 25 22:11:48.400: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 25 22:11:48.473: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-5430 /apis/apps/v1/namespaces/deployment-5430/deployments/test-cleanup-deployment e774426e-a17d-4aac-8701-1b95685f006e 4337027 1 2020-01-25 22:11:48 +0000 UTC   map[name:cleanup-pod] map[] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000eb6448  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

Jan 25 22:11:48.489: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-5430 /apis/apps/v1/namespaces/deployment-5430/replicasets/test-cleanup-deployment-55ffc6b7b6 b4d28288-28d7-4743-a26f-427a06a4113f 4337029 1 2020-01-25 22:11:48 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment e774426e-a17d-4aac-8701-1b95685f006e 0xc000eb6d57 0xc000eb6d58}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000eb6dc8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 25 22:11:48.489: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan 25 22:11:48.490: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-5430 /apis/apps/v1/namespaces/deployment-5430/replicasets/test-cleanup-controller f97608e0-1e6b-4719-b686-eaff0da46d7d 4337028 1 2020-01-25 22:11:39 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment e774426e-a17d-4aac-8701-1b95685f006e 0xc000eb6af7 0xc000eb6af8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000eb6b58  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 25 22:11:48.521: INFO: Pod "test-cleanup-controller-8vfzn" is available:
&Pod{ObjectMeta:{test-cleanup-controller-8vfzn test-cleanup-controller- deployment-5430 /api/v1/namespaces/deployment-5430/pods/test-cleanup-controller-8vfzn 193c6146-d37a-4499-8dd0-b78cd751a2e8 4337021 0 2020-01-25 22:11:39 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller f97608e0-1e6b-4719-b686-eaff0da46d7d 0xc0053e16b7 0xc0053e16b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6srj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6srj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6srj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 22:11:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 22:11:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 22:11:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 22:11:39 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-25 22:11:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 22:11:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://222d3bdc7ec6493cfc6568174028d4486085acba0eaf5cc2b8bd06cd557ec9ae,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 22:11:48.522: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-9cj2f" is not available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-9cj2f test-cleanup-deployment-55ffc6b7b6- deployment-5430 /api/v1/namespaces/deployment-5430/pods/test-cleanup-deployment-55ffc6b7b6-9cj2f 81eed1ab-b595-423f-878d-d3f6e44cc7f4 4337035 0 2020-01-25 22:11:48 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 b4d28288-28d7-4743-a26f-427a06a4113f 0xc0053e1847 0xc0053e1848}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6srj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6srj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6srj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 22:11:48 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:11:48.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5430" for this suite.

• [SLOW TEST:9.452 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":143,"skipped":2408,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:11:48.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 22:11:48.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3082'
Jan 25 22:11:49.122: INFO: stderr: ""
Jan 25 22:11:49.123: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Jan 25 22:11:49.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3082'
Jan 25 22:11:49.541: INFO: stderr: ""
Jan 25 22:11:49.541: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 25 22:11:50.551: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 22:11:50.551: INFO: Found 0 / 1
Jan 25 22:11:51.548: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 22:11:51.548: INFO: Found 0 / 1
Jan 25 22:11:52.585: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 22:11:52.585: INFO: Found 0 / 1
Jan 25 22:11:53.549: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 22:11:53.549: INFO: Found 0 / 1
Jan 25 22:11:54.673: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 22:11:54.673: INFO: Found 0 / 1
Jan 25 22:11:55.547: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 22:11:55.547: INFO: Found 0 / 1
Jan 25 22:11:56.556: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 22:11:56.557: INFO: Found 0 / 1
Jan 25 22:11:57.548: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 22:11:57.548: INFO: Found 0 / 1
Jan 25 22:11:58.552: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 22:11:58.553: INFO: Found 0 / 1
Jan 25 22:11:59.557: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 22:11:59.557: INFO: Found 0 / 1
Jan 25 22:12:00.559: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 22:12:00.560: INFO: Found 0 / 1
Jan 25 22:12:01.552: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 22:12:01.552: INFO: Found 1 / 1
Jan 25 22:12:01.552: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 25 22:12:01.556: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 22:12:01.556: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 25 22:12:01.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-wqqtf --namespace=kubectl-3082'
Jan 25 22:12:01.796: INFO: stderr: ""
Jan 25 22:12:01.796: INFO: stdout: "Name:         agnhost-master-wqqtf\nNamespace:    kubectl-3082\nPriority:     0\nNode:         jerma-node/10.96.2.250\nStart Time:   Sat, 25 Jan 2020 22:11:50 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.44.0.3\nIPs:\n  IP:           10.44.0.3\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://d592e1acc5c19e651b5f997f8cf7ca2c48bf7800b094bb1a1047903f6e722807\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 25 Jan 2020 22:11:59 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z469k (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-z469k:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-z469k\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                 Message\n  ----    ------     ----       ----                 -------\n  Normal  Scheduled  <unknown>  default-scheduler    Successfully assigned kubectl-3082/agnhost-master-wqqtf to jerma-node\n  Normal  Pulled     8s         kubelet, jerma-node  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    4s         kubelet, jerma-node  Created container agnhost-master\n  Normal  Started    2s         kubelet, jerma-node  Started container agnhost-master\n"
Jan 25 22:12:01.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-3082'
Jan 25 22:12:01.956: INFO: stderr: ""
Jan 25 22:12:01.956: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-3082\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  12s   replication-controller  Created pod: agnhost-master-wqqtf\n"
Jan 25 22:12:01.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-3082'
Jan 25 22:12:02.134: INFO: stderr: ""
Jan 25 22:12:02.135: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-3082\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.96.50.251\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.44.0.3:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jan 25 22:12:02.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node'
Jan 25 22:12:02.347: INFO: stderr: ""
Jan 25 22:12:02.348: INFO: stdout: "Name:               jerma-node\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 04 Jan 2020 11:59:52 +0000\nTaints:             <none>\nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-node\n  AcquireTime:     <unset>\n  RenewTime:       Sat, 25 Jan 2020 22:12:00 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 04 Jan 2020 12:00:49 +0000   Sat, 04 Jan 2020 12:00:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Sat, 25 Jan 2020 22:09:23 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sat, 25 Jan 2020 22:09:23 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sat, 25 Jan 2020 22:09:23 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sat, 25 Jan 2020 22:09:23 +0000   Sat, 04 Jan 2020 12:00:52 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.2.250\n  Hostname:    jerma-node\nCapacity:\n  cpu:                4\n  ephemeral-storage:  20145724Ki\n  hugepages-2Mi:      0\n  memory:             4039076Ki\n  pods:               110\nAllocatable:\n  cpu:                4\n  ephemeral-storage:  18566299208\n  hugepages-2Mi:      0\n  memory:             3936676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 bdc16344252549dd902c3a5d68b22f41\n  System UUID:                BDC16344-2525-49DD-902C-3A5D68B22F41\n  Boot ID:                    eec61fc4-8bf6-487f-8f93-ea9731fe757a\n  Kernel Version:             4.15.0-52-generic\n  OS Image:                   Ubuntu 18.04.2 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://18.9.7\n  Kubelet Version:            v1.17.0\n  Kube-Proxy Version:         v1.17.0\nNon-terminated Pods:          (3 in total)\n  Namespace                   Name                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                    ------------  ----------  ---------------  -------------  ---\n  kube-system                 kube-proxy-dsf66        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21d\n  kube-system                 weave-net-kz8lv         20m (0%)      0 (0%)      0 (0%)           0 (0%)         21d\n  kubectl-3082                agnhost-master-wqqtf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              <none>\n"
Jan 25 22:12:02.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-3082'
Jan 25 22:12:02.452: INFO: stderr: ""
Jan 25 22:12:02.452: INFO: stdout: "Name:         kubectl-3082\nLabels:       e2e-framework=kubectl\n              e2e-run=fe3a7429-6068-41ef-9683-277f8aa0278c\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:12:02.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3082" for this suite.

• [SLOW TEST:13.814 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":144,"skipped":2421,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:12:02.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jan 25 22:12:12.693: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jan 25 22:12:22.880: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:12:22.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3340" for this suite.

• [SLOW TEST:20.437 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":145,"skipped":2455,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:12:22.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 25 22:12:23.056: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7f153559-006b-46b0-83d5-ae57bfeed28a" in namespace "downward-api-8243" to be "success or failure"
Jan 25 22:12:23.062: INFO: Pod "downwardapi-volume-7f153559-006b-46b0-83d5-ae57bfeed28a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.774313ms
Jan 25 22:12:25.071: INFO: Pod "downwardapi-volume-7f153559-006b-46b0-83d5-ae57bfeed28a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01531438s
Jan 25 22:12:27.102: INFO: Pod "downwardapi-volume-7f153559-006b-46b0-83d5-ae57bfeed28a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045968625s
Jan 25 22:12:29.109: INFO: Pod "downwardapi-volume-7f153559-006b-46b0-83d5-ae57bfeed28a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052905121s
Jan 25 22:12:31.115: INFO: Pod "downwardapi-volume-7f153559-006b-46b0-83d5-ae57bfeed28a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058627188s
STEP: Saw pod success
Jan 25 22:12:31.115: INFO: Pod "downwardapi-volume-7f153559-006b-46b0-83d5-ae57bfeed28a" satisfied condition "success or failure"
Jan 25 22:12:31.118: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-7f153559-006b-46b0-83d5-ae57bfeed28a container client-container: <nil>
STEP: delete the pod
Jan 25 22:12:31.160: INFO: Waiting for pod downwardapi-volume-7f153559-006b-46b0-83d5-ae57bfeed28a to disappear
Jan 25 22:12:31.167: INFO: Pod downwardapi-volume-7f153559-006b-46b0-83d5-ae57bfeed28a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:12:31.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8243" for this suite.

• [SLOW TEST:8.272 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2473,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:12:31.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Jan 25 22:12:43.879: INFO: Successfully updated pod "adopt-release-8v5gc"
STEP: Checking that the Job readopts the Pod
Jan 25 22:12:43.880: INFO: Waiting up to 15m0s for pod "adopt-release-8v5gc" in namespace "job-4842" to be "adopted"
Jan 25 22:12:43.891: INFO: Pod "adopt-release-8v5gc": Phase="Running", Reason="", readiness=true. Elapsed: 11.562569ms
Jan 25 22:12:45.910: INFO: Pod "adopt-release-8v5gc": Phase="Running", Reason="", readiness=true. Elapsed: 2.030488599s
Jan 25 22:12:45.911: INFO: Pod "adopt-release-8v5gc" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Jan 25 22:12:46.427: INFO: Successfully updated pod "adopt-release-8v5gc"
STEP: Checking that the Job releases the Pod
Jan 25 22:12:46.428: INFO: Waiting up to 15m0s for pod "adopt-release-8v5gc" in namespace "job-4842" to be "released"
Jan 25 22:12:46.476: INFO: Pod "adopt-release-8v5gc": Phase="Running", Reason="", readiness=true. Elapsed: 48.037182ms
Jan 25 22:12:46.476: INFO: Pod "adopt-release-8v5gc" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:12:46.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4842" for this suite.

• [SLOW TEST:15.338 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":147,"skipped":2497,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:12:46.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-787/configmap-test-7a326d1c-cccd-458a-8862-f56d461f223f
STEP: Creating a pod to test consume configMaps
Jan 25 22:12:46.686: INFO: Waiting up to 5m0s for pod "pod-configmaps-fce3b6a5-20ab-4a8c-9a3f-42f6fb55ee0c" in namespace "configmap-787" to be "success or failure"
Jan 25 22:12:46.698: INFO: Pod "pod-configmaps-fce3b6a5-20ab-4a8c-9a3f-42f6fb55ee0c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.144852ms
Jan 25 22:12:48.704: INFO: Pod "pod-configmaps-fce3b6a5-20ab-4a8c-9a3f-42f6fb55ee0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018359845s
Jan 25 22:12:50.712: INFO: Pod "pod-configmaps-fce3b6a5-20ab-4a8c-9a3f-42f6fb55ee0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025806215s
Jan 25 22:12:52.719: INFO: Pod "pod-configmaps-fce3b6a5-20ab-4a8c-9a3f-42f6fb55ee0c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032441748s
Jan 25 22:12:54.724: INFO: Pod "pod-configmaps-fce3b6a5-20ab-4a8c-9a3f-42f6fb55ee0c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038413533s
Jan 25 22:12:56.747: INFO: Pod "pod-configmaps-fce3b6a5-20ab-4a8c-9a3f-42f6fb55ee0c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.060669149s
Jan 25 22:12:58.757: INFO: Pod "pod-configmaps-fce3b6a5-20ab-4a8c-9a3f-42f6fb55ee0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.070980573s
STEP: Saw pod success
Jan 25 22:12:58.757: INFO: Pod "pod-configmaps-fce3b6a5-20ab-4a8c-9a3f-42f6fb55ee0c" satisfied condition "success or failure"
Jan 25 22:12:58.760: INFO: Trying to get logs from node jerma-node pod pod-configmaps-fce3b6a5-20ab-4a8c-9a3f-42f6fb55ee0c container env-test: <nil>
STEP: delete the pod
Jan 25 22:12:58.819: INFO: Waiting for pod pod-configmaps-fce3b6a5-20ab-4a8c-9a3f-42f6fb55ee0c to disappear
Jan 25 22:12:58.831: INFO: Pod pod-configmaps-fce3b6a5-20ab-4a8c-9a3f-42f6fb55ee0c no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:12:58.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-787" for this suite.

• [SLOW TEST:12.321 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2497,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:12:58.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:13:15.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9132" for this suite.

• [SLOW TEST:16.277 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":149,"skipped":2499,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:13:15.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 25 22:13:15.892: INFO: Pod name wrapped-volume-race-f07e0ed2-588f-4320-bff7-0d5152ca2138: Found 0 pods out of 5
Jan 25 22:13:20.903: INFO: Pod name wrapped-volume-race-f07e0ed2-588f-4320-bff7-0d5152ca2138: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-f07e0ed2-588f-4320-bff7-0d5152ca2138 in namespace emptydir-wrapper-5013, will wait for the garbage collector to delete the pods
Jan 25 22:13:51.074: INFO: Deleting ReplicationController wrapped-volume-race-f07e0ed2-588f-4320-bff7-0d5152ca2138 took: 7.494035ms
Jan 25 22:13:51.474: INFO: Terminating ReplicationController wrapped-volume-race-f07e0ed2-588f-4320-bff7-0d5152ca2138 pods took: 400.553456ms
STEP: Creating RC which spawns configmap-volume pods
Jan 25 22:14:12.609: INFO: Pod name wrapped-volume-race-322c3539-bb08-4a72-b9fe-f8e81fb06524: Found 0 pods out of 5
Jan 25 22:14:17.620: INFO: Pod name wrapped-volume-race-322c3539-bb08-4a72-b9fe-f8e81fb06524: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-322c3539-bb08-4a72-b9fe-f8e81fb06524 in namespace emptydir-wrapper-5013, will wait for the garbage collector to delete the pods
Jan 25 22:14:55.716: INFO: Deleting ReplicationController wrapped-volume-race-322c3539-bb08-4a72-b9fe-f8e81fb06524 took: 12.961792ms
Jan 25 22:14:56.217: INFO: Terminating ReplicationController wrapped-volume-race-322c3539-bb08-4a72-b9fe-f8e81fb06524 pods took: 501.05223ms
STEP: Creating RC which spawns configmap-volume pods
Jan 25 22:15:14.273: INFO: Pod name wrapped-volume-race-030ebfef-5747-4b07-8ded-f985535a40e7: Found 0 pods out of 5
Jan 25 22:15:19.283: INFO: Pod name wrapped-volume-race-030ebfef-5747-4b07-8ded-f985535a40e7: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-030ebfef-5747-4b07-8ded-f985535a40e7 in namespace emptydir-wrapper-5013, will wait for the garbage collector to delete the pods
Jan 25 22:15:49.396: INFO: Deleting ReplicationController wrapped-volume-race-030ebfef-5747-4b07-8ded-f985535a40e7 took: 21.419037ms
Jan 25 22:15:49.796: INFO: Terminating ReplicationController wrapped-volume-race-030ebfef-5747-4b07-8ded-f985535a40e7 pods took: 400.753482ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:16:05.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-5013" for this suite.

• [SLOW TEST:170.022 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":150,"skipped":2530,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:16:05.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-0312fc41-fa2f-471a-9af6-39acf4062d92
STEP: Creating a pod to test consume configMaps
Jan 25 22:16:05.343: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6f9474e0-54b9-4eaf-8a98-c5907ceb0e47" in namespace "projected-5513" to be "success or failure"
Jan 25 22:16:05.383: INFO: Pod "pod-projected-configmaps-6f9474e0-54b9-4eaf-8a98-c5907ceb0e47": Phase="Pending", Reason="", readiness=false. Elapsed: 39.740468ms
Jan 25 22:16:07.390: INFO: Pod "pod-projected-configmaps-6f9474e0-54b9-4eaf-8a98-c5907ceb0e47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046321781s
Jan 25 22:16:09.398: INFO: Pod "pod-projected-configmaps-6f9474e0-54b9-4eaf-8a98-c5907ceb0e47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054354536s
Jan 25 22:16:11.435: INFO: Pod "pod-projected-configmaps-6f9474e0-54b9-4eaf-8a98-c5907ceb0e47": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091359178s
Jan 25 22:16:13.455: INFO: Pod "pod-projected-configmaps-6f9474e0-54b9-4eaf-8a98-c5907ceb0e47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.111399785s
STEP: Saw pod success
Jan 25 22:16:13.455: INFO: Pod "pod-projected-configmaps-6f9474e0-54b9-4eaf-8a98-c5907ceb0e47" satisfied condition "success or failure"
Jan 25 22:16:13.471: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-6f9474e0-54b9-4eaf-8a98-c5907ceb0e47 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jan 25 22:16:13.814: INFO: Waiting for pod pod-projected-configmaps-6f9474e0-54b9-4eaf-8a98-c5907ceb0e47 to disappear
Jan 25 22:16:13.834: INFO: Pod pod-projected-configmaps-6f9474e0-54b9-4eaf-8a98-c5907ceb0e47 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:16:13.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5513" for this suite.

• [SLOW TEST:8.801 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2533,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:16:13.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Jan 25 22:16:14.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4052'
Jan 25 22:16:18.288: INFO: stderr: ""
Jan 25 22:16:18.288: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 25 22:16:18.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4052'
Jan 25 22:16:19.169: INFO: stderr: ""
Jan 25 22:16:19.169: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Jan 25 22:16:24.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4052'
Jan 25 22:16:24.284: INFO: stderr: ""
Jan 25 22:16:24.284: INFO: stdout: "update-demo-nautilus-wgx5w update-demo-nautilus-x9zjm "
Jan 25 22:16:24.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgx5w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4052'
Jan 25 22:16:24.419: INFO: stderr: ""
Jan 25 22:16:24.419: INFO: stdout: ""
Jan 25 22:16:24.419: INFO: update-demo-nautilus-wgx5w is created but not running
Jan 25 22:16:29.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4052'
Jan 25 22:16:29.628: INFO: stderr: ""
Jan 25 22:16:29.628: INFO: stdout: "update-demo-nautilus-wgx5w update-demo-nautilus-x9zjm "
Jan 25 22:16:29.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgx5w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4052'
Jan 25 22:16:29.774: INFO: stderr: ""
Jan 25 22:16:29.774: INFO: stdout: "true"
Jan 25 22:16:29.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgx5w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4052'
Jan 25 22:16:29.943: INFO: stderr: ""
Jan 25 22:16:29.943: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 22:16:29.943: INFO: validating pod update-demo-nautilus-wgx5w
Jan 25 22:16:29.951: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 22:16:29.951: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 25 22:16:29.951: INFO: update-demo-nautilus-wgx5w is verified up and running
Jan 25 22:16:29.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x9zjm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4052'
Jan 25 22:16:30.050: INFO: stderr: ""
Jan 25 22:16:30.050: INFO: stdout: "true"
Jan 25 22:16:30.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x9zjm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4052'
Jan 25 22:16:30.172: INFO: stderr: ""
Jan 25 22:16:30.172: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 22:16:30.173: INFO: validating pod update-demo-nautilus-x9zjm
Jan 25 22:16:30.182: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 22:16:30.182: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 25 22:16:30.182: INFO: update-demo-nautilus-x9zjm is verified up and running
STEP: scaling down the replication controller
Jan 25 22:16:30.184: INFO: scanned /root for discovery docs: <nil>
Jan 25 22:16:30.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4052'
Jan 25 22:16:31.337: INFO: stderr: ""
Jan 25 22:16:31.337: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 25 22:16:31.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4052'
Jan 25 22:16:31.498: INFO: stderr: ""
Jan 25 22:16:31.498: INFO: stdout: "update-demo-nautilus-wgx5w update-demo-nautilus-x9zjm "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 25 22:16:36.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4052'
Jan 25 22:16:36.641: INFO: stderr: ""
Jan 25 22:16:36.641: INFO: stdout: "update-demo-nautilus-wgx5w update-demo-nautilus-x9zjm "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 25 22:16:41.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4052'
Jan 25 22:16:41.815: INFO: stderr: ""
Jan 25 22:16:41.816: INFO: stdout: "update-demo-nautilus-wgx5w update-demo-nautilus-x9zjm "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 25 22:16:46.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4052'
Jan 25 22:16:46.983: INFO: stderr: ""
Jan 25 22:16:46.983: INFO: stdout: "update-demo-nautilus-wgx5w "
Jan 25 22:16:46.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgx5w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4052'
Jan 25 22:16:47.110: INFO: stderr: ""
Jan 25 22:16:47.110: INFO: stdout: "true"
Jan 25 22:16:47.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgx5w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4052'
Jan 25 22:16:47.207: INFO: stderr: ""
Jan 25 22:16:47.207: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 22:16:47.207: INFO: validating pod update-demo-nautilus-wgx5w
Jan 25 22:16:47.214: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 22:16:47.214: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 25 22:16:47.214: INFO: update-demo-nautilus-wgx5w is verified up and running
STEP: scaling up the replication controller
Jan 25 22:16:47.217: INFO: scanned /root for discovery docs: <nil>
Jan 25 22:16:47.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4052'
Jan 25 22:16:48.604: INFO: stderr: ""
Jan 25 22:16:48.604: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 25 22:16:48.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4052'
Jan 25 22:16:48.990: INFO: stderr: ""
Jan 25 22:16:48.990: INFO: stdout: "update-demo-nautilus-wgx5w update-demo-nautilus-wp77h "
Jan 25 22:16:48.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgx5w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4052'
Jan 25 22:16:49.165: INFO: stderr: ""
Jan 25 22:16:49.165: INFO: stdout: "true"
Jan 25 22:16:49.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgx5w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4052'
Jan 25 22:16:49.530: INFO: stderr: ""
Jan 25 22:16:49.531: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 22:16:49.531: INFO: validating pod update-demo-nautilus-wgx5w
Jan 25 22:16:49.542: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 22:16:49.542: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 25 22:16:49.542: INFO: update-demo-nautilus-wgx5w is verified up and running
Jan 25 22:16:49.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wp77h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4052'
Jan 25 22:16:49.671: INFO: stderr: ""
Jan 25 22:16:49.672: INFO: stdout: ""
Jan 25 22:16:49.672: INFO: update-demo-nautilus-wp77h is created but not running
Jan 25 22:16:54.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4052'
Jan 25 22:16:54.864: INFO: stderr: ""
Jan 25 22:16:54.865: INFO: stdout: "update-demo-nautilus-wgx5w update-demo-nautilus-wp77h "
Jan 25 22:16:54.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgx5w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4052'
Jan 25 22:16:55.075: INFO: stderr: ""
Jan 25 22:16:55.075: INFO: stdout: "true"
Jan 25 22:16:55.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgx5w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4052'
Jan 25 22:16:55.200: INFO: stderr: ""
Jan 25 22:16:55.200: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 22:16:55.200: INFO: validating pod update-demo-nautilus-wgx5w
Jan 25 22:16:55.207: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 22:16:55.208: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 25 22:16:55.208: INFO: update-demo-nautilus-wgx5w is verified up and running
Jan 25 22:16:55.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wp77h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4052'
Jan 25 22:16:55.314: INFO: stderr: ""
Jan 25 22:16:55.315: INFO: stdout: "true"
Jan 25 22:16:55.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wp77h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4052'
Jan 25 22:16:55.451: INFO: stderr: ""
Jan 25 22:16:55.452: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 22:16:55.452: INFO: validating pod update-demo-nautilus-wp77h
Jan 25 22:16:55.457: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 22:16:55.458: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 25 22:16:55.458: INFO: update-demo-nautilus-wp77h is verified up and running
STEP: using delete to clean up resources
Jan 25 22:16:55.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4052'
Jan 25 22:16:55.723: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 22:16:55.723: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 25 22:16:55.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4052'
Jan 25 22:16:55.871: INFO: stderr: "No resources found in kubectl-4052 namespace.\n"
Jan 25 22:16:55.871: INFO: stdout: ""
Jan 25 22:16:55.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4052 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 25 22:16:55.991: INFO: stderr: ""
Jan 25 22:16:55.991: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:16:55.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4052" for this suite.

• [SLOW TEST:42.059 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":152,"skipped":2566,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:16:56.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-348
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 25 22:16:56.080: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 25 22:17:38.318: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-348 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 22:17:38.318: INFO: >>> kubeConfig: /root/.kube/config
I0125 22:17:38.382225       8 log.go:172] (0xc00056a790) (0xc00129a5a0) Create stream
I0125 22:17:38.382377       8 log.go:172] (0xc00056a790) (0xc00129a5a0) Stream added, broadcasting: 1
I0125 22:17:38.387120       8 log.go:172] (0xc00056a790) Reply frame received for 1
I0125 22:17:38.387170       8 log.go:172] (0xc00056a790) (0xc00129a820) Create stream
I0125 22:17:38.387182       8 log.go:172] (0xc00056a790) (0xc00129a820) Stream added, broadcasting: 3
I0125 22:17:38.389398       8 log.go:172] (0xc00056a790) Reply frame received for 3
I0125 22:17:38.389498       8 log.go:172] (0xc00056a790) (0xc0012db9a0) Create stream
I0125 22:17:38.389510       8 log.go:172] (0xc00056a790) (0xc0012db9a0) Stream added, broadcasting: 5
I0125 22:17:38.392797       8 log.go:172] (0xc00056a790) Reply frame received for 5
I0125 22:17:38.496170       8 log.go:172] (0xc00056a790) Data frame received for 3
I0125 22:17:38.496727       8 log.go:172] (0xc00129a820) (3) Data frame handling
I0125 22:17:38.496808       8 log.go:172] (0xc00129a820) (3) Data frame sent
I0125 22:17:38.664795       8 log.go:172] (0xc00056a790) (0xc00129a820) Stream removed, broadcasting: 3
I0125 22:17:38.665236       8 log.go:172] (0xc00056a790) Data frame received for 1
I0125 22:17:38.665268       8 log.go:172] (0xc00129a5a0) (1) Data frame handling
I0125 22:17:38.665363       8 log.go:172] (0xc00129a5a0) (1) Data frame sent
I0125 22:17:38.665444       8 log.go:172] (0xc00056a790) (0xc0012db9a0) Stream removed, broadcasting: 5
I0125 22:17:38.665566       8 log.go:172] (0xc00056a790) (0xc00129a5a0) Stream removed, broadcasting: 1
I0125 22:17:38.665621       8 log.go:172] (0xc00056a790) Go away received
I0125 22:17:38.666284       8 log.go:172] (0xc00056a790) (0xc00129a5a0) Stream removed, broadcasting: 1
I0125 22:17:38.666307       8 log.go:172] (0xc00056a790) (0xc00129a820) Stream removed, broadcasting: 3
I0125 22:17:38.666322       8 log.go:172] (0xc00056a790) (0xc0012db9a0) Stream removed, broadcasting: 5
Jan 25 22:17:38.666: INFO: Waiting for responses: map[]
Jan 25 22:17:38.686: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-348 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 22:17:38.686: INFO: >>> kubeConfig: /root/.kube/config
I0125 22:17:38.742566       8 log.go:172] (0xc00293db80) (0xc001c4e000) Create stream
I0125 22:17:38.742966       8 log.go:172] (0xc00293db80) (0xc001c4e000) Stream added, broadcasting: 1
I0125 22:17:38.750389       8 log.go:172] (0xc00293db80) Reply frame received for 1
I0125 22:17:38.750660       8 log.go:172] (0xc00293db80) (0xc002a020a0) Create stream
I0125 22:17:38.750713       8 log.go:172] (0xc00293db80) (0xc002a020a0) Stream added, broadcasting: 3
I0125 22:17:38.758966       8 log.go:172] (0xc00293db80) Reply frame received for 3
I0125 22:17:38.759132       8 log.go:172] (0xc00293db80) (0xc00116da40) Create stream
I0125 22:17:38.759175       8 log.go:172] (0xc00293db80) (0xc00116da40) Stream added, broadcasting: 5
I0125 22:17:38.762204       8 log.go:172] (0xc00293db80) Reply frame received for 5
I0125 22:17:38.928953       8 log.go:172] (0xc00293db80) Data frame received for 3
I0125 22:17:38.929504       8 log.go:172] (0xc002a020a0) (3) Data frame handling
I0125 22:17:38.929548       8 log.go:172] (0xc002a020a0) (3) Data frame sent
I0125 22:17:39.057273       8 log.go:172] (0xc00293db80) (0xc002a020a0) Stream removed, broadcasting: 3
I0125 22:17:39.057626       8 log.go:172] (0xc00293db80) Data frame received for 1
I0125 22:17:39.057651       8 log.go:172] (0xc001c4e000) (1) Data frame handling
I0125 22:17:39.057675       8 log.go:172] (0xc001c4e000) (1) Data frame sent
I0125 22:17:39.057780       8 log.go:172] (0xc00293db80) (0xc001c4e000) Stream removed, broadcasting: 1
I0125 22:17:39.058202       8 log.go:172] (0xc00293db80) (0xc00116da40) Stream removed, broadcasting: 5
I0125 22:17:39.058244       8 log.go:172] (0xc00293db80) (0xc001c4e000) Stream removed, broadcasting: 1
I0125 22:17:39.058257       8 log.go:172] (0xc00293db80) (0xc002a020a0) Stream removed, broadcasting: 3
I0125 22:17:39.058266       8 log.go:172] (0xc00293db80) (0xc00116da40) Stream removed, broadcasting: 5
I0125 22:17:39.059260       8 log.go:172] (0xc00293db80) Go away received
Jan 25 22:17:39.059: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:17:39.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-348" for this suite.

• [SLOW TEST:43.074 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2576,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:17:39.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-6401/secret-test-a2df3be8-206f-4bb6-ac43-1ad2084f603e
STEP: Creating a pod to test consume secrets
Jan 25 22:17:39.316: INFO: Waiting up to 5m0s for pod "pod-configmaps-dd909384-ece1-46e1-bd74-30745e126c89" in namespace "secrets-6401" to be "success or failure"
Jan 25 22:17:39.332: INFO: Pod "pod-configmaps-dd909384-ece1-46e1-bd74-30745e126c89": Phase="Pending", Reason="", readiness=false. Elapsed: 16.688365ms
Jan 25 22:17:41.344: INFO: Pod "pod-configmaps-dd909384-ece1-46e1-bd74-30745e126c89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028133031s
Jan 25 22:17:43.351: INFO: Pod "pod-configmaps-dd909384-ece1-46e1-bd74-30745e126c89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035529953s
Jan 25 22:17:45.538: INFO: Pod "pod-configmaps-dd909384-ece1-46e1-bd74-30745e126c89": Phase="Pending", Reason="", readiness=false. Elapsed: 6.221979382s
Jan 25 22:17:48.269: INFO: Pod "pod-configmaps-dd909384-ece1-46e1-bd74-30745e126c89": Phase="Pending", Reason="", readiness=false. Elapsed: 8.95325625s
Jan 25 22:17:50.278: INFO: Pod "pod-configmaps-dd909384-ece1-46e1-bd74-30745e126c89": Phase="Pending", Reason="", readiness=false. Elapsed: 10.961905804s
Jan 25 22:17:52.289: INFO: Pod "pod-configmaps-dd909384-ece1-46e1-bd74-30745e126c89": Phase="Pending", Reason="", readiness=false. Elapsed: 12.973662501s
Jan 25 22:17:54.298: INFO: Pod "pod-configmaps-dd909384-ece1-46e1-bd74-30745e126c89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.982133055s
STEP: Saw pod success
Jan 25 22:17:54.298: INFO: Pod "pod-configmaps-dd909384-ece1-46e1-bd74-30745e126c89" satisfied condition "success or failure"
Jan 25 22:17:54.304: INFO: Trying to get logs from node jerma-node pod pod-configmaps-dd909384-ece1-46e1-bd74-30745e126c89 container env-test: 
STEP: delete the pod
Jan 25 22:17:54.379: INFO: Waiting for pod pod-configmaps-dd909384-ece1-46e1-bd74-30745e126c89 to disappear
Jan 25 22:17:54.384: INFO: Pod pod-configmaps-dd909384-ece1-46e1-bd74-30745e126c89 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:17:54.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6401" for this suite.

• [SLOW TEST:15.330 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2703,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:17:54.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:18:01.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4208" for this suite.
STEP: Destroying namespace "nsdeletetest-4625" for this suite.
Jan 25 22:18:01.131: INFO: Namespace nsdeletetest-4625 was already deleted
STEP: Destroying namespace "nsdeletetest-8490" for this suite.

• [SLOW TEST:6.756 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":155,"skipped":2721,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:18:01.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-fe3ad446-a900-4ae6-a911-904c0d87466d
STEP: Creating a pod to test consume secrets
Jan 25 22:18:01.313: INFO: Waiting up to 5m0s for pod "pod-secrets-86b5a21e-5050-44f3-a880-f86cf525f1e8" in namespace "secrets-4578" to be "success or failure"
Jan 25 22:18:01.345: INFO: Pod "pod-secrets-86b5a21e-5050-44f3-a880-f86cf525f1e8": Phase="Pending", Reason="", readiness=false. Elapsed: 30.153044ms
Jan 25 22:18:03.354: INFO: Pod "pod-secrets-86b5a21e-5050-44f3-a880-f86cf525f1e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039242003s
Jan 25 22:18:05.362: INFO: Pod "pod-secrets-86b5a21e-5050-44f3-a880-f86cf525f1e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047750591s
Jan 25 22:18:07.381: INFO: Pod "pod-secrets-86b5a21e-5050-44f3-a880-f86cf525f1e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066752076s
Jan 25 22:18:09.438: INFO: Pod "pod-secrets-86b5a21e-5050-44f3-a880-f86cf525f1e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.122877155s
STEP: Saw pod success
Jan 25 22:18:09.438: INFO: Pod "pod-secrets-86b5a21e-5050-44f3-a880-f86cf525f1e8" satisfied condition "success or failure"
Jan 25 22:18:09.447: INFO: Trying to get logs from node jerma-node pod pod-secrets-86b5a21e-5050-44f3-a880-f86cf525f1e8 container secret-volume-test: 
STEP: delete the pod
Jan 25 22:18:09.585: INFO: Waiting for pod pod-secrets-86b5a21e-5050-44f3-a880-f86cf525f1e8 to disappear
Jan 25 22:18:09.614: INFO: Pod pod-secrets-86b5a21e-5050-44f3-a880-f86cf525f1e8 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:18:09.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4578" for this suite.

• [SLOW TEST:8.469 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2727,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:18:09.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 22:18:09.829: INFO: Create a RollingUpdate DaemonSet
Jan 25 22:18:09.837: INFO: Check that daemon pods launch on every node of the cluster
Jan 25 22:18:09.911: INFO: Number of nodes with available pods: 0
Jan 25 22:18:09.911: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:18:11.830: INFO: Number of nodes with available pods: 0
Jan 25 22:18:11.830: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:18:12.322: INFO: Number of nodes with available pods: 0
Jan 25 22:18:12.322: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:18:13.047: INFO: Number of nodes with available pods: 0
Jan 25 22:18:13.048: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:18:14.005: INFO: Number of nodes with available pods: 0
Jan 25 22:18:14.005: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:18:14.943: INFO: Number of nodes with available pods: 0
Jan 25 22:18:14.943: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:18:17.089: INFO: Number of nodes with available pods: 0
Jan 25 22:18:17.089: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:18:18.652: INFO: Number of nodes with available pods: 0
Jan 25 22:18:18.652: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:18:19.202: INFO: Number of nodes with available pods: 1
Jan 25 22:18:19.202: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 22:18:19.925: INFO: Number of nodes with available pods: 1
Jan 25 22:18:19.925: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 22:18:20.930: INFO: Number of nodes with available pods: 2
Jan 25 22:18:20.930: INFO: Number of running nodes: 2, number of available pods: 2
Jan 25 22:18:20.930: INFO: Update the DaemonSet to trigger a rollout
Jan 25 22:18:20.942: INFO: Updating DaemonSet daemon-set
Jan 25 22:18:34.009: INFO: Roll back the DaemonSet before rollout is complete
Jan 25 22:18:34.014: INFO: Updating DaemonSet daemon-set
Jan 25 22:18:34.014: INFO: Make sure DaemonSet rollback is complete
Jan 25 22:18:34.023: INFO: Wrong image for pod: daemon-set-54rph. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 22:18:34.023: INFO: Pod daemon-set-54rph is not available
Jan 25 22:18:35.059: INFO: Wrong image for pod: daemon-set-54rph. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 22:18:35.059: INFO: Pod daemon-set-54rph is not available
Jan 25 22:18:36.050: INFO: Wrong image for pod: daemon-set-54rph. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 22:18:36.050: INFO: Pod daemon-set-54rph is not available
Jan 25 22:18:37.055: INFO: Wrong image for pod: daemon-set-54rph. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 22:18:37.056: INFO: Pod daemon-set-54rph is not available
Jan 25 22:18:38.108: INFO: Wrong image for pod: daemon-set-54rph. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 22:18:38.108: INFO: Pod daemon-set-54rph is not available
Jan 25 22:18:39.109: INFO: Pod daemon-set-n5rvq is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7043, will wait for the garbage collector to delete the pods
Jan 25 22:18:39.381: INFO: Deleting DaemonSet.extensions daemon-set took: 205.728891ms
Jan 25 22:18:40.082: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.915928ms
Jan 25 22:18:52.388: INFO: Number of nodes with available pods: 0
Jan 25 22:18:52.388: INFO: Number of running nodes: 0, number of available pods: 0
Jan 25 22:18:52.392: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7043/daemonsets","resourceVersion":"4339312"},"items":null}

Jan 25 22:18:52.398: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7043/pods","resourceVersion":"4339312"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:18:52.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7043" for this suite.

• [SLOW TEST:42.784 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":157,"skipped":2729,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:18:52.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-7215
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-7215
STEP: creating replication controller externalsvc in namespace services-7215
I0125 22:18:52.620695       8 runners.go:189] Created replication controller with name: externalsvc, namespace: services-7215, replica count: 2
I0125 22:18:55.671643       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 22:18:58.672338       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 22:19:01.673054       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 22:19:04.673470       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Jan 25 22:19:04.715: INFO: Creating new exec pod
Jan 25 22:19:12.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7215 execpodtk529 -- /bin/sh -x -c nslookup clusterip-service'
Jan 25 22:19:13.180: INFO: stderr: "I0125 22:19:12.989060    1906 log.go:172] (0xc000a22630) (0xc000a0e000) Create stream\nI0125 22:19:12.989461    1906 log.go:172] (0xc000a22630) (0xc000a0e000) Stream added, broadcasting: 1\nI0125 22:19:12.995603    1906 log.go:172] (0xc000a22630) Reply frame received for 1\nI0125 22:19:12.995676    1906 log.go:172] (0xc000a22630) (0xc000a0e0a0) Create stream\nI0125 22:19:12.995691    1906 log.go:172] (0xc000a22630) (0xc000a0e0a0) Stream added, broadcasting: 3\nI0125 22:19:12.997726    1906 log.go:172] (0xc000a22630) Reply frame received for 3\nI0125 22:19:12.997823    1906 log.go:172] (0xc000a22630) (0xc000a0e140) Create stream\nI0125 22:19:12.997840    1906 log.go:172] (0xc000a22630) (0xc000a0e140) Stream added, broadcasting: 5\nI0125 22:19:12.999901    1906 log.go:172] (0xc000a22630) Reply frame received for 5\nI0125 22:19:13.078780    1906 log.go:172] (0xc000a22630) Data frame received for 5\nI0125 22:19:13.078846    1906 log.go:172] (0xc000a0e140) (5) Data frame handling\nI0125 22:19:13.078866    1906 log.go:172] (0xc000a0e140) (5) Data frame sent\n+ nslookup clusterip-service\nI0125 22:19:13.096288    1906 log.go:172] (0xc000a22630) Data frame received for 3\nI0125 22:19:13.096310    1906 log.go:172] (0xc000a0e0a0) (3) Data frame handling\nI0125 22:19:13.096334    1906 log.go:172] (0xc000a0e0a0) (3) Data frame sent\nI0125 22:19:13.098851    1906 log.go:172] (0xc000a22630) Data frame received for 3\nI0125 22:19:13.098915    1906 log.go:172] (0xc000a0e0a0) (3) Data frame handling\nI0125 22:19:13.098943    1906 log.go:172] (0xc000a0e0a0) (3) Data frame sent\nI0125 22:19:13.169812    1906 log.go:172] (0xc000a22630) Data frame received for 1\nI0125 22:19:13.170045    1906 log.go:172] (0xc000a22630) (0xc000a0e0a0) Stream removed, broadcasting: 3\nI0125 22:19:13.170156    1906 log.go:172] (0xc000a0e000) (1) Data frame handling\nI0125 22:19:13.170189    1906 log.go:172] (0xc000a0e000) (1) Data frame sent\nI0125 22:19:13.170346    1906 log.go:172] (0xc000a22630) (0xc000a0e140) Stream removed, broadcasting: 5\nI0125 22:19:13.170428    1906 log.go:172] (0xc000a22630) (0xc000a0e000) Stream removed, broadcasting: 1\nI0125 22:19:13.170473    1906 log.go:172] (0xc000a22630) Go away received\nI0125 22:19:13.172045    1906 log.go:172] (0xc000a22630) (0xc000a0e000) Stream removed, broadcasting: 1\nI0125 22:19:13.172066    1906 log.go:172] (0xc000a22630) (0xc000a0e0a0) Stream removed, broadcasting: 3\nI0125 22:19:13.172079    1906 log.go:172] (0xc000a22630) (0xc000a0e140) Stream removed, broadcasting: 5\n"
Jan 25 22:19:13.180: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-7215.svc.cluster.local\tcanonical name = externalsvc.services-7215.svc.cluster.local.\nName:\texternalsvc.services-7215.svc.cluster.local\nAddress: 10.96.236.21\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-7215, will wait for the garbage collector to delete the pods
Jan 25 22:19:13.262: INFO: Deleting ReplicationController externalsvc took: 28.119348ms
Jan 25 22:19:13.563: INFO: Terminating ReplicationController externalsvc pods took: 300.893363ms
Jan 25 22:19:32.396: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:19:32.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7215" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:40.072 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":158,"skipped":2736,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:19:32.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 25 22:19:32.599: INFO: Waiting up to 5m0s for pod "pod-edb3407c-6c4c-4ffd-a823-d6d430d934d2" in namespace "emptydir-5399" to be "success or failure"
Jan 25 22:19:32.608: INFO: Pod "pod-edb3407c-6c4c-4ffd-a823-d6d430d934d2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.968115ms
Jan 25 22:19:34.625: INFO: Pod "pod-edb3407c-6c4c-4ffd-a823-d6d430d934d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025245443s
Jan 25 22:19:36.633: INFO: Pod "pod-edb3407c-6c4c-4ffd-a823-d6d430d934d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033919097s
Jan 25 22:19:38.646: INFO: Pod "pod-edb3407c-6c4c-4ffd-a823-d6d430d934d2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046753597s
Jan 25 22:19:40.652: INFO: Pod "pod-edb3407c-6c4c-4ffd-a823-d6d430d934d2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052102165s
Jan 25 22:19:42.658: INFO: Pod "pod-edb3407c-6c4c-4ffd-a823-d6d430d934d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.058043689s
STEP: Saw pod success
Jan 25 22:19:42.658: INFO: Pod "pod-edb3407c-6c4c-4ffd-a823-d6d430d934d2" satisfied condition "success or failure"
Jan 25 22:19:42.660: INFO: Trying to get logs from node jerma-node pod pod-edb3407c-6c4c-4ffd-a823-d6d430d934d2 container test-container: 
STEP: delete the pod
Jan 25 22:19:42.765: INFO: Waiting for pod pod-edb3407c-6c4c-4ffd-a823-d6d430d934d2 to disappear
Jan 25 22:19:42.768: INFO: Pod pod-edb3407c-6c4c-4ffd-a823-d6d430d934d2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:19:42.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5399" for this suite.

• [SLOW TEST:10.288 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2746,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:19:42.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jan 25 22:19:42.900: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:19:55.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9805" for this suite.

• [SLOW TEST:12.914 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":160,"skipped":2748,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:19:55.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 22:19:55.860: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 25 22:19:55.882: INFO: Number of nodes with available pods: 0
Jan 25 22:19:55.882: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 25 22:19:55.966: INFO: Number of nodes with available pods: 0
Jan 25 22:19:55.966: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:19:56.970: INFO: Number of nodes with available pods: 0
Jan 25 22:19:56.970: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:19:57.973: INFO: Number of nodes with available pods: 0
Jan 25 22:19:57.973: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:19:58.971: INFO: Number of nodes with available pods: 0
Jan 25 22:19:58.971: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:19:59.971: INFO: Number of nodes with available pods: 0
Jan 25 22:19:59.971: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:20:00.979: INFO: Number of nodes with available pods: 0
Jan 25 22:20:00.979: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:20:01.973: INFO: Number of nodes with available pods: 0
Jan 25 22:20:01.973: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:20:03.016: INFO: Number of nodes with available pods: 0
Jan 25 22:20:03.016: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:20:03.977: INFO: Number of nodes with available pods: 1
Jan 25 22:20:03.977: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 25 22:20:04.035: INFO: Number of nodes with available pods: 1
Jan 25 22:20:04.035: INFO: Number of running nodes: 0, number of available pods: 1
Jan 25 22:20:05.055: INFO: Number of nodes with available pods: 0
Jan 25 22:20:05.055: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 25 22:20:05.091: INFO: Number of nodes with available pods: 0
Jan 25 22:20:05.091: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:20:06.097: INFO: Number of nodes with available pods: 0
Jan 25 22:20:06.097: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:20:07.106: INFO: Number of nodes with available pods: 0
Jan 25 22:20:07.106: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:20:08.099: INFO: Number of nodes with available pods: 0
Jan 25 22:20:08.099: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:20:09.098: INFO: Number of nodes with available pods: 0
Jan 25 22:20:09.098: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:20:10.099: INFO: Number of nodes with available pods: 0
Jan 25 22:20:10.099: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:20:11.100: INFO: Number of nodes with available pods: 0
Jan 25 22:20:11.100: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:20:12.099: INFO: Number of nodes with available pods: 0
Jan 25 22:20:12.100: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:20:13.098: INFO: Number of nodes with available pods: 0
Jan 25 22:20:13.098: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:20:14.098: INFO: Number of nodes with available pods: 0
Jan 25 22:20:14.098: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:20:15.098: INFO: Number of nodes with available pods: 0
Jan 25 22:20:15.098: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:20:16.115: INFO: Number of nodes with available pods: 0
Jan 25 22:20:16.115: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:20:17.106: INFO: Number of nodes with available pods: 0
Jan 25 22:20:17.107: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:20:18.099: INFO: Number of nodes with available pods: 1
Jan 25 22:20:18.099: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-863, will wait for the garbage collector to delete the pods
Jan 25 22:20:18.174: INFO: Deleting DaemonSet.extensions daemon-set took: 9.013088ms
Jan 25 22:20:18.474: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.586741ms
Jan 25 22:20:24.482: INFO: Number of nodes with available pods: 0
Jan 25 22:20:24.482: INFO: Number of running nodes: 0, number of available pods: 0
Jan 25 22:20:24.486: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-863/daemonsets","resourceVersion":"4339739"},"items":null}

Jan 25 22:20:24.532: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-863/pods","resourceVersion":"4339739"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:20:24.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-863" for this suite.

• [SLOW TEST:28.889 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":161,"skipped":2765,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:20:24.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 22:20:24.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:20:33.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2267" for this suite.

• [SLOW TEST:8.436 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2796,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:20:33.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1672
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 25 22:20:33.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-2681'
Jan 25 22:20:33.358: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 25 22:20:33.358: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
Jan 25 22:20:33.380: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 25 22:20:33.401: INFO: scanned /root for discovery docs: 
Jan 25 22:20:33.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2681'
Jan 25 22:20:54.575: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 25 22:20:54.575: INFO: stdout: "Created e2e-test-httpd-rc-0a7c25c174ad2be0b60540271694b43d\nScaling up e2e-test-httpd-rc-0a7c25c174ad2be0b60540271694b43d from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-0a7c25c174ad2be0b60540271694b43d up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-0a7c25c174ad2be0b60540271694b43d to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
Jan 25 22:20:54.575: INFO: stdout: "Created e2e-test-httpd-rc-0a7c25c174ad2be0b60540271694b43d\nScaling up e2e-test-httpd-rc-0a7c25c174ad2be0b60540271694b43d from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-0a7c25c174ad2be0b60540271694b43d up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-0a7c25c174ad2be0b60540271694b43d to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Jan 25 22:20:54.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-2681'
Jan 25 22:20:54.754: INFO: stderr: ""
Jan 25 22:20:54.754: INFO: stdout: "e2e-test-httpd-rc-0a7c25c174ad2be0b60540271694b43d-crsgc "
Jan 25 22:20:54.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-0a7c25c174ad2be0b60540271694b43d-crsgc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2681'
Jan 25 22:20:54.899: INFO: stderr: ""
Jan 25 22:20:54.900: INFO: stdout: "true"
Jan 25 22:20:54.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-0a7c25c174ad2be0b60540271694b43d-crsgc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2681'
Jan 25 22:20:55.043: INFO: stderr: ""
Jan 25 22:20:55.043: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Jan 25 22:20:55.043: INFO: e2e-test-httpd-rc-0a7c25c174ad2be0b60540271694b43d-crsgc is verified up and running
[AfterEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1678
Jan 25 22:20:55.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-2681'
Jan 25 22:20:55.184: INFO: stderr: ""
Jan 25 22:20:55.185: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:20:55.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2681" for this suite.

• [SLOW TEST:22.180 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1667
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image  [Conformance]","total":278,"completed":163,"skipped":2797,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:20:55.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 25 22:20:55.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-3726'
Jan 25 22:20:55.592: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 25 22:20:55.592: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Jan 25 22:20:55.633: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-2lfmp]
Jan 25 22:20:55.634: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-2lfmp" in namespace "kubectl-3726" to be "running and ready"
Jan 25 22:20:55.639: INFO: Pod "e2e-test-httpd-rc-2lfmp": Phase="Pending", Reason="", readiness=false. Elapsed: 5.815118ms
Jan 25 22:20:57.652: INFO: Pod "e2e-test-httpd-rc-2lfmp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018436844s
Jan 25 22:20:59.665: INFO: Pod "e2e-test-httpd-rc-2lfmp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031705254s
Jan 25 22:21:01.674: INFO: Pod "e2e-test-httpd-rc-2lfmp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040687144s
Jan 25 22:21:03.683: INFO: Pod "e2e-test-httpd-rc-2lfmp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049296826s
Jan 25 22:21:05.696: INFO: Pod "e2e-test-httpd-rc-2lfmp": Phase="Running", Reason="", readiness=true. Elapsed: 10.062683054s
Jan 25 22:21:05.697: INFO: Pod "e2e-test-httpd-rc-2lfmp" satisfied condition "running and ready"
Jan 25 22:21:05.697: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-2lfmp]
Jan 25 22:21:05.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-3726'
Jan 25 22:21:05.902: INFO: stderr: ""
Jan 25 22:21:05.902: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.2. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.2. Set the 'ServerName' directive globally to suppress this message\n[Sat Jan 25 22:21:04.064714 2020] [mpm_event:notice] [pid 1:tid 140509652413288] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sat Jan 25 22:21:04.064799 2020] [core:notice] [pid 1:tid 140509652413288] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jan 25 22:21:05.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-3726'
Jan 25 22:21:06.051: INFO: stderr: ""
Jan 25 22:21:06.051: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:21:06.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3726" for this suite.

• [SLOW TEST:10.854 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1608
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":278,"completed":164,"skipped":2802,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:21:06.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 22:21:06.678: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 22:21:08.729: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587666, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587666, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587666, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587666, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:21:10.762: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587666, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587666, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587666, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587666, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:21:12.735: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587666, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587666, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587666, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587666, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:21:14.743: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587666, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587666, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587666, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587666, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:21:16.736: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587666, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587666, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587666, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587666, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 22:21:19.785: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:21:20.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6039" for this suite.
STEP: Destroying namespace "webhook-6039-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.136 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":165,"skipped":2812,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:21:20.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jan 25 22:21:20.316: INFO: Waiting up to 5m0s for pod "downward-api-746338d5-2584-4587-81a6-ea89b59f56d7" in namespace "downward-api-9291" to be "success or failure"
Jan 25 22:21:20.335: INFO: Pod "downward-api-746338d5-2584-4587-81a6-ea89b59f56d7": Phase="Pending", Reason="", readiness=false. Elapsed: 19.022179ms
Jan 25 22:21:22.346: INFO: Pod "downward-api-746338d5-2584-4587-81a6-ea89b59f56d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030691684s
Jan 25 22:21:24.355: INFO: Pod "downward-api-746338d5-2584-4587-81a6-ea89b59f56d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039390636s
Jan 25 22:21:26.363: INFO: Pod "downward-api-746338d5-2584-4587-81a6-ea89b59f56d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047486469s
Jan 25 22:21:28.371: INFO: Pod "downward-api-746338d5-2584-4587-81a6-ea89b59f56d7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055650721s
Jan 25 22:21:30.377: INFO: Pod "downward-api-746338d5-2584-4587-81a6-ea89b59f56d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061185603s
STEP: Saw pod success
Jan 25 22:21:30.377: INFO: Pod "downward-api-746338d5-2584-4587-81a6-ea89b59f56d7" satisfied condition "success or failure"
Jan 25 22:21:30.383: INFO: Trying to get logs from node jerma-node pod downward-api-746338d5-2584-4587-81a6-ea89b59f56d7 container dapi-container: 
STEP: delete the pod
Jan 25 22:21:30.458: INFO: Waiting for pod downward-api-746338d5-2584-4587-81a6-ea89b59f56d7 to disappear
Jan 25 22:21:30.479: INFO: Pod downward-api-746338d5-2584-4587-81a6-ea89b59f56d7 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:21:30.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9291" for this suite.

• [SLOW TEST:10.297 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2834,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:21:30.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0125 22:22:00.956106       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 25 22:22:00.956: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:22:00.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9542" for this suite.

• [SLOW TEST:30.475 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":167,"skipped":2835,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:22:00.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 22:22:01.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 25 22:22:01.264: INFO: stderr: ""
Jan 25 22:22:01.264: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:10:40Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:22:01.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4055" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":168,"skipped":2852,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:22:01.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-2539/configmap-test-dafce0de-8860-43cf-8763-da68af391eea
STEP: Creating a pod to test consume configMaps
Jan 25 22:22:01.416: INFO: Waiting up to 5m0s for pod "pod-configmaps-a0cc4e27-d4cf-4ad0-b824-d009676efee0" in namespace "configmap-2539" to be "success or failure"
Jan 25 22:22:01.421: INFO: Pod "pod-configmaps-a0cc4e27-d4cf-4ad0-b824-d009676efee0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.403901ms
Jan 25 22:22:03.432: INFO: Pod "pod-configmaps-a0cc4e27-d4cf-4ad0-b824-d009676efee0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015621425s
Jan 25 22:22:05.439: INFO: Pod "pod-configmaps-a0cc4e27-d4cf-4ad0-b824-d009676efee0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022104557s
Jan 25 22:22:07.456: INFO: Pod "pod-configmaps-a0cc4e27-d4cf-4ad0-b824-d009676efee0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039732746s
Jan 25 22:22:09.471: INFO: Pod "pod-configmaps-a0cc4e27-d4cf-4ad0-b824-d009676efee0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054373059s
Jan 25 22:22:11.505: INFO: Pod "pod-configmaps-a0cc4e27-d4cf-4ad0-b824-d009676efee0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.088637416s
STEP: Saw pod success
Jan 25 22:22:11.505: INFO: Pod "pod-configmaps-a0cc4e27-d4cf-4ad0-b824-d009676efee0" satisfied condition "success or failure"
Jan 25 22:22:11.510: INFO: Trying to get logs from node jerma-node pod pod-configmaps-a0cc4e27-d4cf-4ad0-b824-d009676efee0 container env-test: 
STEP: delete the pod
Jan 25 22:22:11.547: INFO: Waiting for pod pod-configmaps-a0cc4e27-d4cf-4ad0-b824-d009676efee0 to disappear
Jan 25 22:22:11.557: INFO: Pod pod-configmaps-a0cc4e27-d4cf-4ad0-b824-d009676efee0 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:22:11.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2539" for this suite.

• [SLOW TEST:10.298 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2880,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:22:11.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Jan 25 22:22:11.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:22:31.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4804" for this suite.

• [SLOW TEST:19.941 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":170,"skipped":2881,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:22:31.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 25 22:22:31.898: INFO: Number of nodes with available pods: 0
Jan 25 22:22:31.898: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:32.939: INFO: Number of nodes with available pods: 0
Jan 25 22:22:32.939: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:34.410: INFO: Number of nodes with available pods: 0
Jan 25 22:22:34.410: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:34.918: INFO: Number of nodes with available pods: 0
Jan 25 22:22:34.918: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:35.910: INFO: Number of nodes with available pods: 0
Jan 25 22:22:35.910: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:37.195: INFO: Number of nodes with available pods: 0
Jan 25 22:22:37.196: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:39.016: INFO: Number of nodes with available pods: 0
Jan 25 22:22:39.016: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:40.205: INFO: Number of nodes with available pods: 0
Jan 25 22:22:40.206: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:40.914: INFO: Number of nodes with available pods: 0
Jan 25 22:22:40.914: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:41.920: INFO: Number of nodes with available pods: 2
Jan 25 22:22:41.920: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 25 22:22:41.961: INFO: Number of nodes with available pods: 1
Jan 25 22:22:41.962: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:43.001: INFO: Number of nodes with available pods: 1
Jan 25 22:22:43.002: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:43.978: INFO: Number of nodes with available pods: 1
Jan 25 22:22:43.978: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:44.976: INFO: Number of nodes with available pods: 1
Jan 25 22:22:44.976: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:45.974: INFO: Number of nodes with available pods: 1
Jan 25 22:22:45.974: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:46.979: INFO: Number of nodes with available pods: 1
Jan 25 22:22:46.979: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:47.974: INFO: Number of nodes with available pods: 1
Jan 25 22:22:47.974: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:48.981: INFO: Number of nodes with available pods: 1
Jan 25 22:22:48.981: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:49.979: INFO: Number of nodes with available pods: 1
Jan 25 22:22:49.980: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:50.969: INFO: Number of nodes with available pods: 1
Jan 25 22:22:50.970: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:51.975: INFO: Number of nodes with available pods: 1
Jan 25 22:22:51.975: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:52.973: INFO: Number of nodes with available pods: 1
Jan 25 22:22:52.973: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:53.975: INFO: Number of nodes with available pods: 1
Jan 25 22:22:53.975: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:54.976: INFO: Number of nodes with available pods: 1
Jan 25 22:22:54.976: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:55.977: INFO: Number of nodes with available pods: 1
Jan 25 22:22:55.977: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:56.975: INFO: Number of nodes with available pods: 1
Jan 25 22:22:56.975: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:57.976: INFO: Number of nodes with available pods: 1
Jan 25 22:22:57.977: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:58.973: INFO: Number of nodes with available pods: 1
Jan 25 22:22:58.973: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:22:59.983: INFO: Number of nodes with available pods: 2
Jan 25 22:22:59.984: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7655, will wait for the garbage collector to delete the pods
Jan 25 22:23:00.063: INFO: Deleting DaemonSet.extensions daemon-set took: 21.818026ms
Jan 25 22:23:00.465: INFO: Terminating DaemonSet.extensions daemon-set pods took: 402.483275ms
Jan 25 22:23:13.171: INFO: Number of nodes with available pods: 0
Jan 25 22:23:13.172: INFO: Number of running nodes: 0, number of available pods: 0
Jan 25 22:23:13.175: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7655/daemonsets","resourceVersion":"4340535"},"items":null}

Jan 25 22:23:13.179: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7655/pods","resourceVersion":"4340535"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:23:13.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7655" for this suite.

• [SLOW TEST:41.713 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":171,"skipped":2924,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:23:13.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 22:23:13.390: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 25 22:23:13.407: INFO: Number of nodes with available pods: 0
Jan 25 22:23:13.407: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:23:15.510: INFO: Number of nodes with available pods: 0
Jan 25 22:23:15.511: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:23:16.421: INFO: Number of nodes with available pods: 0
Jan 25 22:23:16.421: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:23:17.418: INFO: Number of nodes with available pods: 0
Jan 25 22:23:17.418: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:23:20.291: INFO: Number of nodes with available pods: 0
Jan 25 22:23:20.291: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:23:21.281: INFO: Number of nodes with available pods: 0
Jan 25 22:23:21.281: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:23:21.535: INFO: Number of nodes with available pods: 0
Jan 25 22:23:21.535: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:23:22.430: INFO: Number of nodes with available pods: 1
Jan 25 22:23:22.430: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 22:23:23.425: INFO: Number of nodes with available pods: 2
Jan 25 22:23:23.425: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 25 22:23:23.477: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:23.477: INFO: Wrong image for pod: daemon-set-987s8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:24.492: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:24.492: INFO: Wrong image for pod: daemon-set-987s8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:25.893: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:25.893: INFO: Wrong image for pod: daemon-set-987s8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:26.580: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:26.580: INFO: Wrong image for pod: daemon-set-987s8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:27.867: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:27.867: INFO: Wrong image for pod: daemon-set-987s8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:28.494: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:28.494: INFO: Wrong image for pod: daemon-set-987s8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:29.491: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:29.491: INFO: Wrong image for pod: daemon-set-987s8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:29.491: INFO: Pod daemon-set-987s8 is not available
Jan 25 22:23:30.498: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:30.499: INFO: Wrong image for pod: daemon-set-987s8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:30.499: INFO: Pod daemon-set-987s8 is not available
Jan 25 22:23:31.496: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:31.496: INFO: Wrong image for pod: daemon-set-987s8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:31.496: INFO: Pod daemon-set-987s8 is not available
Jan 25 22:23:32.493: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:32.494: INFO: Wrong image for pod: daemon-set-987s8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:32.494: INFO: Pod daemon-set-987s8 is not available
Jan 25 22:23:33.491: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:33.491: INFO: Pod daemon-set-vwx77 is not available
Jan 25 22:23:34.494: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:34.494: INFO: Pod daemon-set-vwx77 is not available
Jan 25 22:23:35.497: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:35.497: INFO: Pod daemon-set-vwx77 is not available
Jan 25 22:23:37.070: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:37.070: INFO: Pod daemon-set-vwx77 is not available
Jan 25 22:23:37.503: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:37.504: INFO: Pod daemon-set-vwx77 is not available
Jan 25 22:23:38.506: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:38.506: INFO: Pod daemon-set-vwx77 is not available
Jan 25 22:23:39.492: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:40.497: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:41.515: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:42.493: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:43.492: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:43.492: INFO: Pod daemon-set-6mmv2 is not available
Jan 25 22:23:44.490: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:44.490: INFO: Pod daemon-set-6mmv2 is not available
Jan 25 22:23:45.491: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:45.491: INFO: Pod daemon-set-6mmv2 is not available
Jan 25 22:23:46.493: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:46.493: INFO: Pod daemon-set-6mmv2 is not available
Jan 25 22:23:47.493: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:47.493: INFO: Pod daemon-set-6mmv2 is not available
Jan 25 22:23:48.494: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:48.495: INFO: Pod daemon-set-6mmv2 is not available
Jan 25 22:23:49.496: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:49.496: INFO: Pod daemon-set-6mmv2 is not available
Jan 25 22:23:50.493: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:50.493: INFO: Pod daemon-set-6mmv2 is not available
Jan 25 22:23:51.491: INFO: Wrong image for pod: daemon-set-6mmv2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 25 22:23:51.491: INFO: Pod daemon-set-6mmv2 is not available
Jan 25 22:23:52.517: INFO: Pod daemon-set-vrftq is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 25 22:23:52.572: INFO: Number of nodes with available pods: 1
Jan 25 22:23:52.572: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:23:53.628: INFO: Number of nodes with available pods: 1
Jan 25 22:23:53.629: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:23:54.599: INFO: Number of nodes with available pods: 1
Jan 25 22:23:54.600: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:23:55.599: INFO: Number of nodes with available pods: 1
Jan 25 22:23:55.599: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:23:56.582: INFO: Number of nodes with available pods: 1
Jan 25 22:23:56.582: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:23:57.587: INFO: Number of nodes with available pods: 1
Jan 25 22:23:57.587: INFO: Node jerma-node is running more than one daemon pod
Jan 25 22:23:58.638: INFO: Number of nodes with available pods: 2
Jan 25 22:23:58.638: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1003, will wait for the garbage collector to delete the pods
Jan 25 22:23:58.723: INFO: Deleting DaemonSet.extensions daemon-set took: 9.487413ms
Jan 25 22:23:59.124: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.851868ms
Jan 25 22:24:13.130: INFO: Number of nodes with available pods: 0
Jan 25 22:24:13.130: INFO: Number of running nodes: 0, number of available pods: 0
Jan 25 22:24:13.133: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1003/daemonsets","resourceVersion":"4340768"},"items":null}

Jan 25 22:24:13.138: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1003/pods","resourceVersion":"4340768"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:24:13.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1003" for this suite.

• [SLOW TEST:59.954 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":172,"skipped":2954,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:24:13.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:24:24.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1098" for this suite.

• [SLOW TEST:11.398 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":173,"skipped":2954,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:24:24.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 25 22:24:24.701: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b59a578d-13b9-49c8-93a7-859d2e52a308" in namespace "downward-api-4430" to be "success or failure"
Jan 25 22:24:24.727: INFO: Pod "downwardapi-volume-b59a578d-13b9-49c8-93a7-859d2e52a308": Phase="Pending", Reason="", readiness=false. Elapsed: 25.623501ms
Jan 25 22:24:26.735: INFO: Pod "downwardapi-volume-b59a578d-13b9-49c8-93a7-859d2e52a308": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033618683s
Jan 25 22:24:28.741: INFO: Pod "downwardapi-volume-b59a578d-13b9-49c8-93a7-859d2e52a308": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039315772s
Jan 25 22:24:30.747: INFO: Pod "downwardapi-volume-b59a578d-13b9-49c8-93a7-859d2e52a308": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045855644s
Jan 25 22:24:32.753: INFO: Pod "downwardapi-volume-b59a578d-13b9-49c8-93a7-859d2e52a308": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051643194s
Jan 25 22:24:34.763: INFO: Pod "downwardapi-volume-b59a578d-13b9-49c8-93a7-859d2e52a308": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061538929s
STEP: Saw pod success
Jan 25 22:24:34.763: INFO: Pod "downwardapi-volume-b59a578d-13b9-49c8-93a7-859d2e52a308" satisfied condition "success or failure"
Jan 25 22:24:34.768: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-b59a578d-13b9-49c8-93a7-859d2e52a308 container client-container: 
STEP: delete the pod
Jan 25 22:24:34.831: INFO: Waiting for pod downwardapi-volume-b59a578d-13b9-49c8-93a7-859d2e52a308 to disappear
Jan 25 22:24:34.835: INFO: Pod downwardapi-volume-b59a578d-13b9-49c8-93a7-859d2e52a308 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:24:34.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4430" for this suite.

• [SLOW TEST:10.292 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2954,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:24:34.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:24:43.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9546" for this suite.

• [SLOW TEST:8.182 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2969,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:24:43.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 22:24:44.167: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 22:24:46.186: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587884, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587884, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587884, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587884, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:24:48.191: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587884, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587884, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587884, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587884, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:24:50.193: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587884, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587884, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587884, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587884, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:24:52.194: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587884, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587884, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587884, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587884, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 22:24:55.262: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:24:55.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3451" for this suite.
STEP: Destroying namespace "webhook-3451-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.522 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":176,"skipped":2982,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:24:55.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 22:24:56.294: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 22:24:58.326: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587896, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587896, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587896, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587896, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:25:00.339: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587896, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587896, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587896, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587896, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:25:02.332: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587896, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587896, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587896, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587896, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 22:25:05.420: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 22:25:05.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:25:06.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8851" for this suite.
STEP: Destroying namespace "webhook-8851-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.277 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":177,"skipped":3014,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:25:06.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 22:25:07.508: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 22:25:09.521: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587907, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587907, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587907, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587907, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:25:11.527: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587907, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587907, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587907, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587907, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:25:13.528: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587907, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587907, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587907, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587907, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:25:15.530: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587907, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587907, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587907, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587907, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 22:25:18.578: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: updating (PUT) the admitted configmap to a non-compliant one, which should be rejected by the webhook
STEP: updating (PATCH) the admitted configmap to a non-compliant one, which should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:25:28.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6058" for this suite.
STEP: Destroying namespace "webhook-6058-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:22.262 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":178,"skipped":3089,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:25:29.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 22:25:29.891: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 22:25:31.903: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587929, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587929, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587930, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587929, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:25:33.913: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587929, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587929, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587930, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587929, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:25:35.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587929, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587929, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587930, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587929, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:25:37.914: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587929, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587929, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587930, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587929, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:25:39.919: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587929, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587929, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587930, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715587929, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 22:25:42.943: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:25:43.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7633" for this suite.
STEP: Destroying namespace "webhook-7633-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.079 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":179,"skipped":3116,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:25:43.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan 25 22:25:43.350: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 25 22:25:43.378: INFO: Waiting for terminating namespaces to be deleted...
Jan 25 22:25:43.391: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 25 22:25:43.403: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 25 22:25:43.403: INFO: 	Container weave ready: true, restart count 1
Jan 25 22:25:43.403: INFO: 	Container weave-npc ready: true, restart count 0
Jan 25 22:25:43.403: INFO: webhook-to-be-mutated from webhook-7633 started at 2020-01-25 22:25:43 +0000 UTC (1 container status recorded)
Jan 25 22:25:43.403: INFO: 	Container example ready: false, restart count 0
Jan 25 22:25:43.403: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan 25 22:25:43.403: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 22:25:43.403: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 25 22:25:43.424: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 25 22:25:43.424: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 25 22:25:43.424: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 25 22:25:43.424: INFO: 	Container etcd ready: true, restart count 1
Jan 25 22:25:43.424: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 25 22:25:43.424: INFO: 	Container coredns ready: true, restart count 0
Jan 25 22:25:43.424: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 25 22:25:43.424: INFO: 	Container coredns ready: true, restart count 0
Jan 25 22:25:43.424: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 25 22:25:43.424: INFO: 	Container weave ready: true, restart count 0
Jan 25 22:25:43.424: INFO: 	Container weave-npc ready: true, restart count 0
Jan 25 22:25:43.424: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 25 22:25:43.424: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 25 22:25:43.424: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan 25 22:25:43.424: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 22:25:43.424: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 25 22:25:43.424: INFO: 	Container kube-scheduler ready: true, restart count 4
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete the pod here to free the resources it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-5f883407-74fa-4192-9cb9-4c259c2c42a2 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-5f883407-74fa-4192-9cb9-4c259c2c42a2 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-5f883407-74fa-4192-9cb9-4c259c2c42a2
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:26:07.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4899" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:24.449 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":180,"skipped":3116,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:26:07.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 25 22:26:07.772: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c5bf405-92f8-4367-ac4a-dde2d128189b" in namespace "projected-6964" to be "success or failure"
Jan 25 22:26:07.782: INFO: Pod "downwardapi-volume-6c5bf405-92f8-4367-ac4a-dde2d128189b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.06836ms
Jan 25 22:26:09.790: INFO: Pod "downwardapi-volume-6c5bf405-92f8-4367-ac4a-dde2d128189b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017370626s
Jan 25 22:26:11.799: INFO: Pod "downwardapi-volume-6c5bf405-92f8-4367-ac4a-dde2d128189b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026936144s
Jan 25 22:26:13.813: INFO: Pod "downwardapi-volume-6c5bf405-92f8-4367-ac4a-dde2d128189b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040819622s
Jan 25 22:26:15.824: INFO: Pod "downwardapi-volume-6c5bf405-92f8-4367-ac4a-dde2d128189b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052053794s
Jan 25 22:26:17.833: INFO: Pod "downwardapi-volume-6c5bf405-92f8-4367-ac4a-dde2d128189b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060542927s
STEP: Saw pod success
Jan 25 22:26:17.833: INFO: Pod "downwardapi-volume-6c5bf405-92f8-4367-ac4a-dde2d128189b" satisfied condition "success or failure"
Jan 25 22:26:17.836: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-6c5bf405-92f8-4367-ac4a-dde2d128189b container client-container: 
STEP: delete the pod
Jan 25 22:26:18.010: INFO: Waiting for pod downwardapi-volume-6c5bf405-92f8-4367-ac4a-dde2d128189b to disappear
Jan 25 22:26:18.016: INFO: Pod downwardapi-volume-6c5bf405-92f8-4367-ac4a-dde2d128189b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:26:18.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6964" for this suite.

• [SLOW TEST:10.371 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":3119,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:26:18.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1444
STEP: creating a pod
Jan 25 22:26:18.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-6607 -- logs-generator --log-lines-total 100 --run-duration 20s'
Jan 25 22:26:21.087: INFO: stderr: ""
Jan 25 22:26:21.087: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Jan 25 22:26:21.088: INFO: Waiting up to 5m0s for 1 pod to be running and ready, or succeeded: [logs-generator]
Jan 25 22:26:21.088: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6607" to be "running and ready, or succeeded"
Jan 25 22:26:21.094: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.502979ms
Jan 25 22:26:23.099: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010936398s
Jan 25 22:26:25.107: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018590681s
Jan 25 22:26:27.114: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026412053s
Jan 25 22:26:29.122: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 8.03414802s
Jan 25 22:26:29.122: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Jan 25 22:26:29.122: INFO: Wanted 1 pod to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Jan 25 22:26:29.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6607'
Jan 25 22:26:29.353: INFO: stderr: ""
Jan 25 22:26:29.353: INFO: stdout: "I0125 22:26:27.384154       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/8p9 522\nI0125 22:26:27.584667       1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/fnq 365\nI0125 22:26:27.785169       1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/jsb6 348\nI0125 22:26:27.984657       1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/cqh 209\nI0125 22:26:28.184549       1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/tdw 507\nI0125 22:26:28.384794       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/l525 523\nI0125 22:26:28.592208       1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/wsjh 577\nI0125 22:26:28.784494       1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/4lz 437\nI0125 22:26:28.984526       1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/s9x 385\nI0125 22:26:29.184789       1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/wkx 349\n"
STEP: limiting log lines
Jan 25 22:26:29.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6607 --tail=1'
Jan 25 22:26:29.499: INFO: stderr: ""
Jan 25 22:26:29.499: INFO: stdout: "I0125 22:26:29.384438       1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/tvpt 571\n"
Jan 25 22:26:29.500: INFO: got output "I0125 22:26:29.384438       1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/tvpt 571\n"
STEP: limiting log bytes
Jan 25 22:26:29.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6607 --limit-bytes=1'
Jan 25 22:26:29.627: INFO: stderr: ""
Jan 25 22:26:29.627: INFO: stdout: "I"
Jan 25 22:26:29.627: INFO: got output "I"
STEP: exposing timestamps
Jan 25 22:26:29.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6607 --tail=1 --timestamps'
Jan 25 22:26:29.722: INFO: stderr: ""
Jan 25 22:26:29.722: INFO: stdout: "2020-01-25T22:26:29.588117531Z I0125 22:26:29.585458       1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/mz7 334\n"
Jan 25 22:26:29.722: INFO: got output "2020-01-25T22:26:29.588117531Z I0125 22:26:29.585458       1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/mz7 334\n"
STEP: restricting to a time range
Jan 25 22:26:32.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6607 --since=1s'
Jan 25 22:26:32.340: INFO: stderr: ""
Jan 25 22:26:32.341: INFO: stdout: "I0125 22:26:31.384790       1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/zqhw 419\nI0125 22:26:31.584550       1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/9t7k 448\nI0125 22:26:31.784542       1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/p9f 562\nI0125 22:26:31.984565       1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/kc89 437\nI0125 22:26:32.184884       1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/nmv7 416\n"
Jan 25 22:26:32.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6607 --since=24h'
Jan 25 22:26:32.476: INFO: stderr: ""
Jan 25 22:26:32.476: INFO: stdout: "I0125 22:26:27.384154       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/8p9 522\nI0125 22:26:27.584667       1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/fnq 365\nI0125 22:26:27.785169       1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/jsb6 348\nI0125 22:26:27.984657       1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/cqh 209\nI0125 22:26:28.184549       1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/tdw 507\nI0125 22:26:28.384794       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/l525 523\nI0125 22:26:28.592208       1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/wsjh 577\nI0125 22:26:28.784494       1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/4lz 437\nI0125 22:26:28.984526       1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/s9x 385\nI0125 22:26:29.184789       1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/wkx 349\nI0125 22:26:29.384438       1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/tvpt 571\nI0125 22:26:29.585458       1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/mz7 334\nI0125 22:26:29.784424       1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/hp85 341\nI0125 22:26:29.984407       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/6vks 215\nI0125 22:26:30.184533       1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/gqc 201\nI0125 22:26:30.384549       1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/55j 451\nI0125 22:26:30.584589       1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/mkw7 394\nI0125 22:26:30.784448       1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/q9b 350\nI0125 22:26:30.984433       1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/8kx 547\nI0125 22:26:31.184511       1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/hpwp 453\nI0125 22:26:31.384790       1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/zqhw 419\nI0125 22:26:31.584550       1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/9t7k 448\nI0125 22:26:31.784542       1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/p9f 562\nI0125 22:26:31.984565       1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/kc89 437\nI0125 22:26:32.184884       1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/nmv7 416\nI0125 22:26:32.384367       1 logs_generator.go:76] 25 POST /api/v1/namespaces/kube-system/pods/l6s 309\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
Jan 25 22:26:32.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-6607'
Jan 25 22:26:37.028: INFO: stderr: ""
Jan 25 22:26:37.029: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:26:37.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6607" for this suite.

• [SLOW TEST:19.104 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1440
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":182,"skipped":3129,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:26:37.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:26:42.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8171" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":183,"skipped":3137,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:26:42.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:26:42.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-940" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":184,"skipped":3147,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:26:42.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 25 22:26:42.346: INFO: Waiting up to 5m0s for pod "pod-c829dfd5-ae30-4a33-92eb-93727235ff95" in namespace "emptydir-1701" to be "success or failure"
Jan 25 22:26:42.367: INFO: Pod "pod-c829dfd5-ae30-4a33-92eb-93727235ff95": Phase="Pending", Reason="", readiness=false. Elapsed: 20.571039ms
Jan 25 22:26:44.372: INFO: Pod "pod-c829dfd5-ae30-4a33-92eb-93727235ff95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025592381s
Jan 25 22:26:46.377: INFO: Pod "pod-c829dfd5-ae30-4a33-92eb-93727235ff95": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030473423s
Jan 25 22:26:48.444: INFO: Pod "pod-c829dfd5-ae30-4a33-92eb-93727235ff95": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097539884s
Jan 25 22:26:50.455: INFO: Pod "pod-c829dfd5-ae30-4a33-92eb-93727235ff95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.108409559s
STEP: Saw pod success
Jan 25 22:26:50.455: INFO: Pod "pod-c829dfd5-ae30-4a33-92eb-93727235ff95" satisfied condition "success or failure"
Jan 25 22:26:50.463: INFO: Trying to get logs from node jerma-node pod pod-c829dfd5-ae30-4a33-92eb-93727235ff95 container test-container: 
STEP: delete the pod
Jan 25 22:26:50.549: INFO: Waiting for pod pod-c829dfd5-ae30-4a33-92eb-93727235ff95 to disappear
Jan 25 22:26:50.554: INFO: Pod pod-c829dfd5-ae30-4a33-92eb-93727235ff95 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:26:50.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1701" for this suite.

• [SLOW TEST:8.322 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3159,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:26:50.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4139 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4139;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4139 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4139;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4139.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4139.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4139.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4139.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4139.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4139.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4139.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4139.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4139.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4139.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4139.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4139.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4139.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 123.118.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.118.123_udp@PTR;check="$$(dig +tcp +noall +answer +search 123.118.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.118.123_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4139 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4139;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4139 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4139;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4139.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4139.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4139.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4139.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4139.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4139.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4139.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4139.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4139.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4139.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4139.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4139.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4139.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 123.118.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.118.123_udp@PTR;check="$$(dig +tcp +noall +answer +search 123.118.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.118.123_tcp@PTR;sleep 1; done
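Each probe above exercises one DNS name form — partial service name, name with namespace, name with namespace and svc, SRV records, pod A records, and PTR — over both UDP and TCP. Run by hand inside a pod in namespace dns-4139, a single probe looks like:

# +search appends the pod's resolv.conf search domains, which is what
# makes the partial name "dns-test-service.dns-4139" resolve at all.
dig +search +notcp +noall +answer dns-test-service.dns-4139 A
dig +search +tcp +noall +answer dns-test-service.dns-4139 A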

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 25 22:27:03.094: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:03.109: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:03.116: INFO: Unable to read wheezy_udp@dns-test-service.dns-4139 from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:03.120: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4139 from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:03.124: INFO: Unable to read wheezy_udp@dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:03.128: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:03.133: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:03.138: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:03.169: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:03.172: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:03.176: INFO: Unable to read jessie_udp@dns-test-service.dns-4139 from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:03.180: INFO: Unable to read jessie_tcp@dns-test-service.dns-4139 from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:03.183: INFO: Unable to read jessie_udp@dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:03.186: INFO: Unable to read jessie_tcp@dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:03.190: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:03.193: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:03.215: INFO: Lookups using dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4139 wheezy_tcp@dns-test-service.dns-4139 wheezy_udp@dns-test-service.dns-4139.svc wheezy_tcp@dns-test-service.dns-4139.svc wheezy_udp@_http._tcp.dns-test-service.dns-4139.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4139.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4139 jessie_tcp@dns-test-service.dns-4139 jessie_udp@dns-test-service.dns-4139.svc jessie_tcp@dns-test-service.dns-4139.svc jessie_udp@_http._tcp.dns-test-service.dns-4139.svc jessie_tcp@_http._tcp.dns-test-service.dns-4139.svc]

Jan 25 22:27:08.226: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:08.231: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:08.235: INFO: Unable to read wheezy_udp@dns-test-service.dns-4139 from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:08.240: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4139 from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:08.245: INFO: Unable to read wheezy_udp@dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:08.250: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:08.256: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:08.261: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:08.291: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:08.294: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:08.298: INFO: Unable to read jessie_udp@dns-test-service.dns-4139 from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:08.301: INFO: Unable to read jessie_tcp@dns-test-service.dns-4139 from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:08.305: INFO: Unable to read jessie_udp@dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:08.311: INFO: Unable to read jessie_tcp@dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:08.318: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:08.323: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:08.344: INFO: Lookups using dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4139 wheezy_tcp@dns-test-service.dns-4139 wheezy_udp@dns-test-service.dns-4139.svc wheezy_tcp@dns-test-service.dns-4139.svc wheezy_udp@_http._tcp.dns-test-service.dns-4139.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4139.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4139 jessie_tcp@dns-test-service.dns-4139 jessie_udp@dns-test-service.dns-4139.svc jessie_tcp@dns-test-service.dns-4139.svc jessie_udp@_http._tcp.dns-test-service.dns-4139.svc jessie_tcp@_http._tcp.dns-test-service.dns-4139.svc]

Jan 25 22:27:13.224: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:13.233: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:13.239: INFO: Unable to read wheezy_udp@dns-test-service.dns-4139 from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:13.243: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4139 from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:13.250: INFO: Unable to read wheezy_udp@dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:13.255: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:13.263: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:13.268: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:13.318: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:13.323: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:13.327: INFO: Unable to read jessie_udp@dns-test-service.dns-4139 from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:13.332: INFO: Unable to read jessie_tcp@dns-test-service.dns-4139 from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:13.340: INFO: Unable to read jessie_udp@dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:13.344: INFO: Unable to read jessie_tcp@dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:13.351: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:13.355: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:13.400: INFO: Lookups using dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4139 wheezy_tcp@dns-test-service.dns-4139 wheezy_udp@dns-test-service.dns-4139.svc wheezy_tcp@dns-test-service.dns-4139.svc wheezy_udp@_http._tcp.dns-test-service.dns-4139.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4139.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4139 jessie_tcp@dns-test-service.dns-4139 jessie_udp@dns-test-service.dns-4139.svc jessie_tcp@dns-test-service.dns-4139.svc jessie_udp@_http._tcp.dns-test-service.dns-4139.svc jessie_tcp@_http._tcp.dns-test-service.dns-4139.svc]

Jan 25 22:27:18.229: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:18.240: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:18.245: INFO: Unable to read wheezy_udp@dns-test-service.dns-4139 from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:18.250: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4139 from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:18.254: INFO: Unable to read wheezy_udp@dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:18.260: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:18.264: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:18.269: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:18.318: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:18.327: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:18.332: INFO: Unable to read jessie_udp@dns-test-service.dns-4139 from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:18.336: INFO: Unable to read jessie_tcp@dns-test-service.dns-4139 from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:18.340: INFO: Unable to read jessie_udp@dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:18.344: INFO: Unable to read jessie_tcp@dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:18.348: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:18.352: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:18.376: INFO: Lookups using dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4139 wheezy_tcp@dns-test-service.dns-4139 wheezy_udp@dns-test-service.dns-4139.svc wheezy_tcp@dns-test-service.dns-4139.svc wheezy_udp@_http._tcp.dns-test-service.dns-4139.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4139.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4139 jessie_tcp@dns-test-service.dns-4139 jessie_udp@dns-test-service.dns-4139.svc jessie_tcp@dns-test-service.dns-4139.svc jessie_udp@_http._tcp.dns-test-service.dns-4139.svc jessie_tcp@_http._tcp.dns-test-service.dns-4139.svc]

Jan 25 22:27:23.221: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:23.226: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:23.230: INFO: Unable to read wheezy_udp@dns-test-service.dns-4139 from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:23.233: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4139 from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:23.238: INFO: Unable to read wheezy_udp@dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:23.242: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:23.246: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:23.250: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:23.274: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:23.277: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:23.280: INFO: Unable to read jessie_udp@dns-test-service.dns-4139 from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:23.284: INFO: Unable to read jessie_tcp@dns-test-service.dns-4139 from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:23.287: INFO: Unable to read jessie_udp@dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:23.290: INFO: Unable to read jessie_tcp@dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:23.293: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:23.297: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:23.326: INFO: Lookups using dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4139 wheezy_tcp@dns-test-service.dns-4139 wheezy_udp@dns-test-service.dns-4139.svc wheezy_tcp@dns-test-service.dns-4139.svc wheezy_udp@_http._tcp.dns-test-service.dns-4139.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4139.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4139 jessie_tcp@dns-test-service.dns-4139 jessie_udp@dns-test-service.dns-4139.svc jessie_tcp@dns-test-service.dns-4139.svc jessie_udp@_http._tcp.dns-test-service.dns-4139.svc jessie_tcp@_http._tcp.dns-test-service.dns-4139.svc]

Jan 25 22:27:28.225: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:28.313: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:28.318: INFO: Unable to read wheezy_udp@dns-test-service.dns-4139 from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:28.323: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4139 from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:28.327: INFO: Unable to read wheezy_udp@dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:28.330: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:28.333: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:28.337: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:28.365: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:28.369: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:28.373: INFO: Unable to read jessie_udp@dns-test-service.dns-4139 from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:28.376: INFO: Unable to read jessie_tcp@dns-test-service.dns-4139 from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:28.381: INFO: Unable to read jessie_udp@dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:28.385: INFO: Unable to read jessie_tcp@dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:28.389: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:28.392: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4139.svc from pod dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a: the server could not find the requested resource (get pods dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a)
Jan 25 22:27:28.413: INFO: Lookups using dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4139 wheezy_tcp@dns-test-service.dns-4139 wheezy_udp@dns-test-service.dns-4139.svc wheezy_tcp@dns-test-service.dns-4139.svc wheezy_udp@_http._tcp.dns-test-service.dns-4139.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4139.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4139 jessie_tcp@dns-test-service.dns-4139 jessie_udp@dns-test-service.dns-4139.svc jessie_tcp@dns-test-service.dns-4139.svc jessie_udp@_http._tcp.dns-test-service.dns-4139.svc jessie_tcp@_http._tcp.dns-test-service.dns-4139.svc]

Jan 25 22:27:33.338: INFO: DNS probes using dns-4139/dns-test-a65ba2b2-5942-4820-97be-c091a8f3f78a succeeded
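
The retries above are the probe loop at work: each entry such as wheezy_udp@dns-test-service names a result file that the probe pod writes only after a successful lookup, and the suite polls those files until all of them appear. A minimal sketch of one UDP probe, in the same style as the probe commands this suite runs inside the wheezy container (service name taken from this run):

check="$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$check" && echo OK > /results/wheezy_udp@dns-test-service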

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:27:33.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4139" for this suite.

• [SLOW TEST:43.146 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":186,"skipped":3190,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:27:33.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:27:43.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5347" for this suite.

• [SLOW TEST:10.206 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3214,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:27:43.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 25 22:27:52.170: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:27:52.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2862" for this suite.

• [SLOW TEST:8.288 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":3221,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:27:52.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8408.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8408.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8408.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8408.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8408.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8408.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 25 22:28:04.503: INFO: DNS probes using dns-8408/dns-test-23ae1a91-a4a7-40f0-969e-743cb53697f5 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:28:04.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8408" for this suite.

• [SLOW TEST:12.382 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":189,"skipped":3245,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:28:04.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap that has name configmap-test-emptyKey-57f88e68-493f-40e6-abcb-c4c48d15d812
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:28:04.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6300" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":190,"skipped":3247,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:28:04.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 25 22:28:04.829: INFO: Waiting up to 5m0s for pod "downwardapi-volume-96abcbbd-ec31-4d78-8730-959a187cc140" in namespace "projected-7675" to be "success or failure"
Jan 25 22:28:04.900: INFO: Pod "downwardapi-volume-96abcbbd-ec31-4d78-8730-959a187cc140": Phase="Pending", Reason="", readiness=false. Elapsed: 71.212516ms
Jan 25 22:28:06.907: INFO: Pod "downwardapi-volume-96abcbbd-ec31-4d78-8730-959a187cc140": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078231463s
Jan 25 22:28:08.936: INFO: Pod "downwardapi-volume-96abcbbd-ec31-4d78-8730-959a187cc140": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107396943s
Jan 25 22:28:10.944: INFO: Pod "downwardapi-volume-96abcbbd-ec31-4d78-8730-959a187cc140": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114859009s
Jan 25 22:28:13.012: INFO: Pod "downwardapi-volume-96abcbbd-ec31-4d78-8730-959a187cc140": Phase="Pending", Reason="", readiness=false. Elapsed: 8.183739763s
Jan 25 22:28:15.022: INFO: Pod "downwardapi-volume-96abcbbd-ec31-4d78-8730-959a187cc140": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.193714107s
STEP: Saw pod success
Jan 25 22:28:15.023: INFO: Pod "downwardapi-volume-96abcbbd-ec31-4d78-8730-959a187cc140" satisfied condition "success or failure"
Jan 25 22:28:15.027: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-96abcbbd-ec31-4d78-8730-959a187cc140 container client-container: 
STEP: delete the pod
Jan 25 22:28:15.239: INFO: Waiting for pod downwardapi-volume-96abcbbd-ec31-4d78-8730-959a187cc140 to disappear
Jan 25 22:28:15.246: INFO: Pod downwardapi-volume-96abcbbd-ec31-4d78-8730-959a187cc140 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:28:15.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7675" for this suite.

• [SLOW TEST:10.596 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3271,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:28:15.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 25 22:28:33.528: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 22:28:33.533: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 22:28:35.534: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 22:28:35.540: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 22:28:37.534: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 22:28:37.541: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 22:28:39.534: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 22:28:39.543: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 22:28:41.536: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 22:28:41.547: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 22:28:43.534: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 22:28:43.543: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:28:43.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7123" for this suite.

• [SLOW TEST:28.304 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3277,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:28:43.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-820cccd3-57d4-4863-9ea5-e5f5b2ea3246
STEP: Creating a pod to test consume secrets
Jan 25 22:28:43.683: INFO: Waiting up to 5m0s for pod "pod-secrets-e73dfde6-ca09-4813-b0df-93afe7831afb" in namespace "secrets-8320" to be "success or failure"
Jan 25 22:28:43.712: INFO: Pod "pod-secrets-e73dfde6-ca09-4813-b0df-93afe7831afb": Phase="Pending", Reason="", readiness=false. Elapsed: 28.689672ms
Jan 25 22:28:45.722: INFO: Pod "pod-secrets-e73dfde6-ca09-4813-b0df-93afe7831afb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038192875s
Jan 25 22:28:47.731: INFO: Pod "pod-secrets-e73dfde6-ca09-4813-b0df-93afe7831afb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047275614s
Jan 25 22:28:49.745: INFO: Pod "pod-secrets-e73dfde6-ca09-4813-b0df-93afe7831afb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061847435s
Jan 25 22:28:51.753: INFO: Pod "pod-secrets-e73dfde6-ca09-4813-b0df-93afe7831afb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06925501s
Jan 25 22:28:53.761: INFO: Pod "pod-secrets-e73dfde6-ca09-4813-b0df-93afe7831afb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.077858033s
STEP: Saw pod success
Jan 25 22:28:53.761: INFO: Pod "pod-secrets-e73dfde6-ca09-4813-b0df-93afe7831afb" satisfied condition "success or failure"
Jan 25 22:28:53.766: INFO: Trying to get logs from node jerma-node pod pod-secrets-e73dfde6-ca09-4813-b0df-93afe7831afb container secret-volume-test: 
STEP: delete the pod
Jan 25 22:28:53.875: INFO: Waiting for pod pod-secrets-e73dfde6-ca09-4813-b0df-93afe7831afb to disappear
Jan 25 22:28:53.885: INFO: Pod pod-secrets-e73dfde6-ca09-4813-b0df-93afe7831afb no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:28:53.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8320" for this suite.

• [SLOW TEST:10.344 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3281,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:28:53.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:29:28.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3224" for this suite.

• [SLOW TEST:34.213 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":194,"skipped":3317,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:29:28.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Jan 25 22:29:28.198: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:29:28.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4633" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":195,"skipped":3317,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:29:28.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 22:29:30.197: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588170, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588170, loc:(*time.Location)(0x7d100a0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-5f65f8c764\""}}, CollisionCount:(*int32)(nil)}
Jan 25 22:29:32.248: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588170, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588170, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588170, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588170, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:29:34.206: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588170, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588170, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588170, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588170, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:29:36.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588170, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588170, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588170, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588170, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 22:29:39.235: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:29:39.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6124" for this suite.
STEP: Destroying namespace "webhook-6124-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.403 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":196,"skipped":3325,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:29:39.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-8125
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-8125
Jan 25 22:29:40.011: INFO: Found 0 stateful pods, waiting for 1
Jan 25 22:29:50.022: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 25 22:29:50.051: INFO: Deleting all statefulset in ns statefulset-8125
Jan 25 22:29:50.070: INFO: Scaling statefulset ss to 0
Jan 25 22:30:00.192: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 22:30:00.196: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:30:00.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8125" for this suite.

• [SLOW TEST:20.442 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":197,"skipped":3352,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:30:00.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 22:30:00.404: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 22.524352ms)
Jan 25 22:30:00.411: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 6.69072ms)
Jan 25 22:30:00.468: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 56.896235ms)
Jan 25 22:30:00.556: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 87.155629ms)
Jan 25 22:30:00.588: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 31.682834ms)
Jan 25 22:30:00.592: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 4.593156ms)
Jan 25 22:30:00.597: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 5.211367ms)
Jan 25 22:30:00.603: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 5.519337ms)
Jan 25 22:30:00.608: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 5.27518ms)
Jan 25 22:30:00.614: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 5.255682ms)
Jan 25 22:30:00.618: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 4.221254ms)
Jan 25 22:30:00.623: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 4.634575ms)
Jan 25 22:30:00.629: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 6.177534ms)
Jan 25 22:30:00.633: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 4.512787ms)
Jan 25 22:30:00.639: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 5.36501ms)
Jan 25 22:30:00.644: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 5.177561ms)
Jan 25 22:30:00.650: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 5.536213ms)
Jan 25 22:30:00.654: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 4.392766ms)
Jan 25 22:30:00.660: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 5.559725ms)
Jan 25 22:30:00.664: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log apt/ ... (200; 4.114264ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:30:00.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-960" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":278,"completed":198,"skipped":3353,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:30:00.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Jan 25 22:30:00.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8230'
Jan 25 22:30:01.347: INFO: stderr: ""
Jan 25 22:30:01.347: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 25 22:30:01.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8230'
Jan 25 22:30:01.525: INFO: stderr: ""
Jan 25 22:30:01.525: INFO: stdout: "update-demo-nautilus-gkzrz update-demo-nautilus-thhbk "
Jan 25 22:30:01.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gkzrz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8230'
Jan 25 22:30:01.656: INFO: stderr: ""
Jan 25 22:30:01.656: INFO: stdout: ""
Jan 25 22:30:01.656: INFO: update-demo-nautilus-gkzrz is created but not running
Jan 25 22:30:06.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8230'
Jan 25 22:30:07.117: INFO: stderr: ""
Jan 25 22:30:07.117: INFO: stdout: "update-demo-nautilus-gkzrz update-demo-nautilus-thhbk "
Jan 25 22:30:07.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gkzrz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8230'
Jan 25 22:30:07.701: INFO: stderr: ""
Jan 25 22:30:07.701: INFO: stdout: ""
Jan 25 22:30:07.701: INFO: update-demo-nautilus-gkzrz is created but not running
Jan 25 22:30:12.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8230'
Jan 25 22:30:12.904: INFO: stderr: ""
Jan 25 22:30:12.904: INFO: stdout: "update-demo-nautilus-gkzrz update-demo-nautilus-thhbk "
Jan 25 22:30:12.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gkzrz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8230'
Jan 25 22:30:13.080: INFO: stderr: ""
Jan 25 22:30:13.080: INFO: stdout: "true"
Jan 25 22:30:13.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gkzrz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8230'
Jan 25 22:30:13.196: INFO: stderr: ""
Jan 25 22:30:13.196: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 22:30:13.196: INFO: validating pod update-demo-nautilus-gkzrz
Jan 25 22:30:13.203: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 22:30:13.203: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 25 22:30:13.203: INFO: update-demo-nautilus-gkzrz is verified up and running
Jan 25 22:30:13.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-thhbk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8230'
Jan 25 22:30:13.295: INFO: stderr: ""
Jan 25 22:30:13.296: INFO: stdout: "true"
Jan 25 22:30:13.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-thhbk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8230'
Jan 25 22:30:13.463: INFO: stderr: ""
Jan 25 22:30:13.463: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 22:30:13.463: INFO: validating pod update-demo-nautilus-thhbk
Jan 25 22:30:13.489: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 22:30:13.489: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 25 22:30:13.489: INFO: update-demo-nautilus-thhbk is verified up and running
STEP: rolling-update to new replication controller
Jan 25 22:30:13.492: INFO: scanned /root for discovery docs: 
Jan 25 22:30:13.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8230'
Jan 25 22:30:41.673: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 25 22:30:41.673: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 25 22:30:41.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8230'
Jan 25 22:30:41.832: INFO: stderr: ""
Jan 25 22:30:41.832: INFO: stdout: "update-demo-kitten-52mwt update-demo-kitten-6bngb update-demo-nautilus-gkzrz "
STEP: Replicas for name=update-demo: expected=2 actual=3
Jan 25 22:30:46.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8230'
Jan 25 22:30:47.019: INFO: stderr: ""
Jan 25 22:30:47.020: INFO: stdout: "update-demo-kitten-52mwt update-demo-kitten-6bngb "
Jan 25 22:30:47.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-52mwt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8230'
Jan 25 22:30:47.128: INFO: stderr: ""
Jan 25 22:30:47.129: INFO: stdout: "true"
Jan 25 22:30:47.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-52mwt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8230'
Jan 25 22:30:47.234: INFO: stderr: ""
Jan 25 22:30:47.234: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 25 22:30:47.234: INFO: validating pod update-demo-kitten-52mwt
Jan 25 22:30:47.260: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 25 22:30:47.260: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan 25 22:30:47.261: INFO: update-demo-kitten-52mwt is verified up and running
Jan 25 22:30:47.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6bngb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8230'
Jan 25 22:30:47.422: INFO: stderr: ""
Jan 25 22:30:47.422: INFO: stdout: "true"
Jan 25 22:30:47.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6bngb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8230'
Jan 25 22:30:47.524: INFO: stderr: ""
Jan 25 22:30:47.524: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 25 22:30:47.524: INFO: validating pod update-demo-kitten-6bngb
Jan 25 22:30:47.530: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 25 22:30:47.530: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan 25 22:30:47.530: INFO: update-demo-kitten-6bngb is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:30:47.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8230" for this suite.

• [SLOW TEST:46.860 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":199,"skipped":3365,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:30:47.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:30:57.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1403" for this suite.

• [SLOW TEST:10.287 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3372,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:30:57.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-18db02fb-df8b-4d6c-9d45-6b39892370d8
STEP: Creating configMap with name cm-test-opt-upd-db609d4a-4d4c-4b25-bf6b-b0838c60cfa7
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-18db02fb-df8b-4d6c-9d45-6b39892370d8
STEP: Updating configmap cm-test-opt-upd-db609d4a-4d4c-4b25-bf6b-b0838c60cfa7
STEP: Creating configMap with name cm-test-opt-create-01645fa0-a952-42b4-b1f0-b55e1a448cca
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:32:37.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9869" for this suite.

• [SLOW TEST:99.566 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3389,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:32:37.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0125 22:32:40.898431       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 25 22:32:40.898: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:32:40.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2125" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":202,"skipped":3396,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:32:40.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 22:32:41.191: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:32:42.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2524" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":203,"skipped":3422,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:32:42.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan 25 22:32:43.004: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 25 22:32:43.027: INFO: Waiting for terminating namespaces to be deleted...
Jan 25 22:32:43.156: INFO: Logging pods the kubelet thinks are on node jerma-node before test
Jan 25 22:32:43.186: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Jan 25 22:32:43.186: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 22:32:43.186: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 25 22:32:43.186: INFO: 	Container weave ready: true, restart count 1
Jan 25 22:32:43.186: INFO: 	Container weave-npc ready: true, restart count 0
Jan 25 22:32:43.186: INFO: pod-configmaps-927477cd-807a-462a-a7b1-0856d792a2bb from configmap-9869 started at 2020-01-25 22:30:58 +0000 UTC (3 container statuses recorded)
Jan 25 22:32:43.186: INFO: 	Container createcm-volume-test ready: true, restart count 0
Jan 25 22:32:43.186: INFO: 	Container delcm-volume-test ready: true, restart count 0
Jan 25 22:32:43.186: INFO: 	Container updcm-volume-test ready: true, restart count 0
Jan 25 22:32:43.186: INFO: Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 25 22:32:43.228: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 25 22:32:43.228: INFO: 	Container coredns ready: true, restart count 0
Jan 25 22:32:43.228: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 25 22:32:43.228: INFO: 	Container coredns ready: true, restart count 0
Jan 25 22:32:43.228: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 25 22:32:43.228: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 25 22:32:43.228: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Jan 25 22:32:43.228: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 22:32:43.228: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 25 22:32:43.228: INFO: 	Container weave ready: true, restart count 0
Jan 25 22:32:43.228: INFO: 	Container weave-npc ready: true, restart count 0
Jan 25 22:32:43.228: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 25 22:32:43.228: INFO: 	Container kube-scheduler ready: true, restart count 4
Jan 25 22:32:43.228: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 25 22:32:43.228: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 25 22:32:43.228: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 25 22:32:43.228: INFO: 	Container etcd ready: true, restart count 1
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-node
STEP: verifying the node has the label node jerma-server-mvvl6gufaqub
Jan 25 22:32:43.753: INFO: Pod pod-configmaps-927477cd-807a-462a-a7b1-0856d792a2bb requesting resource cpu=0m on Node jerma-node
Jan 25 22:32:43.753: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 25 22:32:43.753: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 25 22:32:43.753: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Jan 25 22:32:43.753: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub
Jan 25 22:32:43.753: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub
Jan 25 22:32:43.753: INFO: Pod kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Jan 25 22:32:43.753: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node
Jan 25 22:32:43.754: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 25 22:32:43.754: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node
Jan 25 22:32:43.754: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub
STEP: Starting Pods to consume most of the cluster CPU.
Jan 25 22:32:43.754: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node
Jan 25 22:32:43.923: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-18d11b09-be34-44e4-95c6-c7cd0053a8b2.15ed41f55738a2ee], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7070/filler-pod-18d11b09-be34-44e4-95c6-c7cd0053a8b2 to jerma-server-mvvl6gufaqub]
STEP: Considering event: Type = [Normal], Name = [filler-pod-18d11b09-be34-44e4-95c6-c7cd0053a8b2.15ed41f686f1720e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-18d11b09-be34-44e4-95c6-c7cd0053a8b2.15ed41f76a8f12b3], Reason = [Created], Message = [Created container filler-pod-18d11b09-be34-44e4-95c6-c7cd0053a8b2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-18d11b09-be34-44e4-95c6-c7cd0053a8b2.15ed41f78e404c9f], Reason = [Started], Message = [Started container filler-pod-18d11b09-be34-44e4-95c6-c7cd0053a8b2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-b526a198-caa0-4e76-a25d-e243bae467d0.15ed41f54b9cdee8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7070/filler-pod-b526a198-caa0-4e76-a25d-e243bae467d0 to jerma-node]
STEP: Considering event: Type = [Normal], Name = [filler-pod-b526a198-caa0-4e76-a25d-e243bae467d0.15ed41f682aa686c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-b526a198-caa0-4e76-a25d-e243bae467d0.15ed41f785664221], Reason = [Created], Message = [Created container filler-pod-b526a198-caa0-4e76-a25d-e243bae467d0]
STEP: Considering event: Type = [Normal], Name = [filler-pod-b526a198-caa0-4e76-a25d-e243bae467d0.15ed41f79e38b1a5], Reason = [Started], Message = [Started container filler-pod-b526a198-caa0-4e76-a25d-e243bae467d0]
STEP: Considering event: Type = [Warning], Name = [additional-pod.15ed41f7ae4b03b7], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: Considering event: Type = [Warning], Name = [additional-pod.15ed41f7afdc1204], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-server-mvvl6gufaqub
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-node
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:32:57.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7070" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:14.894 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":278,"completed":204,"skipped":3423,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:32:57.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 25 22:32:57.692: INFO: Waiting up to 5m0s for pod "pod-211faaa2-d425-498c-8e1b-97d0f2ee6f60" in namespace "emptydir-1080" to be "success or failure"
Jan 25 22:32:57.702: INFO: Pod "pod-211faaa2-d425-498c-8e1b-97d0f2ee6f60": Phase="Pending", Reason="", readiness=false. Elapsed: 9.156654ms
Jan 25 22:32:59.831: INFO: Pod "pod-211faaa2-d425-498c-8e1b-97d0f2ee6f60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138887006s
Jan 25 22:33:01.838: INFO: Pod "pod-211faaa2-d425-498c-8e1b-97d0f2ee6f60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145781579s
Jan 25 22:33:03.898: INFO: Pod "pod-211faaa2-d425-498c-8e1b-97d0f2ee6f60": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206040829s
Jan 25 22:33:06.376: INFO: Pod "pod-211faaa2-d425-498c-8e1b-97d0f2ee6f60": Phase="Pending", Reason="", readiness=false. Elapsed: 8.683548423s
Jan 25 22:33:08.383: INFO: Pod "pod-211faaa2-d425-498c-8e1b-97d0f2ee6f60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.690372851s
STEP: Saw pod success
Jan 25 22:33:08.383: INFO: Pod "pod-211faaa2-d425-498c-8e1b-97d0f2ee6f60" satisfied condition "success or failure"
Jan 25 22:33:08.387: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-211faaa2-d425-498c-8e1b-97d0f2ee6f60 container test-container: 
STEP: delete the pod
Jan 25 22:33:09.604: INFO: Waiting for pod pod-211faaa2-d425-498c-8e1b-97d0f2ee6f60 to disappear
Jan 25 22:33:09.879: INFO: Pod pod-211faaa2-d425-498c-8e1b-97d0f2ee6f60 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:33:09.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1080" for this suite.

• [SLOW TEST:12.700 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3449,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:33:10.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 22:33:11.371: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 22:33:13.400: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588391, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588391, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588391, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588391, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:33:15.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588391, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588391, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588391, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588391, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:33:17.411: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588391, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588391, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588391, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715588391, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 22:33:20.458: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:33:21.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2454" for this suite.
STEP: Destroying namespace "webhook-2454-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.053 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":206,"skipped":3474,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:33:21.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:33:31.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-342" for this suite.

• [SLOW TEST:10.110 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3478,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:33:31.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-6347f41f-452f-452d-9ee2-9df5d5478bed
STEP: Creating a pod to test consume configMaps
Jan 25 22:33:31.648: INFO: Waiting up to 5m0s for pod "pod-configmaps-55727407-d73b-4734-8d94-9b8d9ec8b8d5" in namespace "configmap-8593" to be "success or failure"
Jan 25 22:33:31.693: INFO: Pod "pod-configmaps-55727407-d73b-4734-8d94-9b8d9ec8b8d5": Phase="Pending", Reason="", readiness=false. Elapsed: 44.908153ms
Jan 25 22:33:33.704: INFO: Pod "pod-configmaps-55727407-d73b-4734-8d94-9b8d9ec8b8d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055944033s
Jan 25 22:33:35.712: INFO: Pod "pod-configmaps-55727407-d73b-4734-8d94-9b8d9ec8b8d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063374614s
Jan 25 22:33:37.716: INFO: Pod "pod-configmaps-55727407-d73b-4734-8d94-9b8d9ec8b8d5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067551486s
Jan 25 22:33:39.722: INFO: Pod "pod-configmaps-55727407-d73b-4734-8d94-9b8d9ec8b8d5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073729468s
Jan 25 22:33:41.746: INFO: Pod "pod-configmaps-55727407-d73b-4734-8d94-9b8d9ec8b8d5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.097755272s
Jan 25 22:33:43.767: INFO: Pod "pod-configmaps-55727407-d73b-4734-8d94-9b8d9ec8b8d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.118246998s
STEP: Saw pod success
Jan 25 22:33:43.767: INFO: Pod "pod-configmaps-55727407-d73b-4734-8d94-9b8d9ec8b8d5" satisfied condition "success or failure"
Jan 25 22:33:43.834: INFO: Trying to get logs from node jerma-node pod pod-configmaps-55727407-d73b-4734-8d94-9b8d9ec8b8d5 container configmap-volume-test: 
STEP: delete the pod
Jan 25 22:33:44.007: INFO: Waiting for pod pod-configmaps-55727407-d73b-4734-8d94-9b8d9ec8b8d5 to disappear
Jan 25 22:33:44.017: INFO: Pod pod-configmaps-55727407-d73b-4734-8d94-9b8d9ec8b8d5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:33:44.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8593" for this suite.

• [SLOW TEST:12.585 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3508,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:33:44.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-f3d885c4-9fa2-4ec8-811a-153d02e3e060
STEP: Creating secret with name s-test-opt-upd-9712b9b9-8f94-4228-bfda-5ed7f974ede0
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f3d885c4-9fa2-4ec8-811a-153d02e3e060
STEP: Updating secret s-test-opt-upd-9712b9b9-8f94-4228-bfda-5ed7f974ede0
STEP: Creating secret with name s-test-opt-create-502fa020-34e3-45e5-9531-d556b0947d87
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:35:13.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3579" for this suite.

• [SLOW TEST:89.926 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3521,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:35:13.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 22:35:14.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Jan 25 22:35:17.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7195 create -f -'
Jan 25 22:35:21.072: INFO: stderr: ""
Jan 25 22:35:21.072: INFO: stdout: "e2e-test-crd-publish-openapi-6764-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan 25 22:35:21.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7195 delete e2e-test-crd-publish-openapi-6764-crds test-foo'
Jan 25 22:35:21.362: INFO: stderr: ""
Jan 25 22:35:21.362: INFO: stdout: "e2e-test-crd-publish-openapi-6764-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Jan 25 22:35:21.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7195 apply -f -'
Jan 25 22:35:21.654: INFO: stderr: ""
Jan 25 22:35:21.654: INFO: stdout: "e2e-test-crd-publish-openapi-6764-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan 25 22:35:21.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7195 delete e2e-test-crd-publish-openapi-6764-crds test-foo'
Jan 25 22:35:21.798: INFO: stderr: ""
Jan 25 22:35:21.798: INFO: stdout: "e2e-test-crd-publish-openapi-6764-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Jan 25 22:35:21.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7195 create -f -'
Jan 25 22:35:22.165: INFO: rc: 1
Jan 25 22:35:22.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7195 apply -f -'
Jan 25 22:35:22.717: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Jan 25 22:35:22.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7195 create -f -'
Jan 25 22:35:23.146: INFO: rc: 1
Jan 25 22:35:23.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7195 apply -f -'
Jan 25 22:35:23.644: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Jan 25 22:35:23.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6764-crds'
Jan 25 22:35:23.984: INFO: stderr: ""
Jan 25 22:35:23.985: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6764-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Jan 25 22:35:23.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6764-crds.metadata'
Jan 25 22:35:24.368: INFO: stderr: ""
Jan 25 22:35:24.368: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6764-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. 
If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. 
More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t<string>\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t<string>\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t<string>\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Jan 25 22:35:24.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6764-crds.spec'
Jan 25 22:35:24.730: INFO: stderr: ""
Jan 25 22:35:24.731: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6764-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jan 25 22:35:24.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6764-crds.spec.bars'
Jan 25 22:35:25.128: INFO: stderr: ""
Jan 25 22:35:25.128: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6764-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jan 25 22:35:25.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6764-crds.spec.bars2'
Jan 25 22:35:25.583: INFO: rc: 1
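The rc: 1 here is the assertion passing: kubectl explain exits non-zero when asked about a property that is absent from the published schema. Run by hand it looks roughly like this (the exact error wording varies by kubectl version, so treat the message as an assumption):

kubectl explain e2e-test-crd-publish-openapi-6764-crds.spec.bars2
# error: field "bars2" does not exist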
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:35:28.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7195" for this suite.

• [SLOW TEST:14.559 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":210,"skipped":3576,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:35:28.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-1721b512-2cfd-4b6a-880f-74663a71bff6
Jan 25 22:35:28.777: INFO: Pod name my-hostname-basic-1721b512-2cfd-4b6a-880f-74663a71bff6: Found 0 pods out of 1
Jan 25 22:35:33.790: INFO: Pod name my-hostname-basic-1721b512-2cfd-4b6a-880f-74663a71bff6: Found 1 pods out of 1
Jan 25 22:35:33.790: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-1721b512-2cfd-4b6a-880f-74663a71bff6" are running
Jan 25 22:35:35.871: INFO: Pod "my-hostname-basic-1721b512-2cfd-4b6a-880f-74663a71bff6-64fn2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 22:35:28 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 22:35:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1721b512-2cfd-4b6a-880f-74663a71bff6]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 22:35:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1721b512-2cfd-4b6a-880f-74663a71bff6]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 22:35:28 +0000 UTC Reason: Message:}])
Jan 25 22:35:35.872: INFO: Trying to dial the pod
Jan 25 22:35:40.899: INFO: Controller my-hostname-basic-1721b512-2cfd-4b6a-880f-74663a71bff6: Got expected result from replica 1 [my-hostname-basic-1721b512-2cfd-4b6a-880f-74663a71bff6-64fn2]: "my-hostname-basic-1721b512-2cfd-4b6a-880f-74663a71bff6-64fn2", 1 of 1 required successes so far
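The check above is end-to-end: the controller's single replica serves its own pod name over HTTP, and the test dials the replica and compares the response against the pod name from the API. A hand-written equivalent of the controller this test generates might look like the sketch below; the image, port, and labels are assumptions in line with what the e2e suite typically uses, not values taken from this log:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [serve-hostname]   # agnhost subcommand that serves the pod hostname over HTTP
        ports:
        - containerPort: 9376
EOF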
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:35:40.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6687" for this suite.

• [SLOW TEST:12.394 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":211,"skipped":3583,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:35:40.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-8437
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-8437
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8437
Jan 25 22:35:41.072: INFO: Found 0 stateful pods, waiting for 1
Jan 25 22:35:51.080: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 25 22:35:51.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 22:35:51.739: INFO: stderr: "I0125 22:35:51.286473    2965 log.go:172] (0xc000128e70) (0xc00090a6e0) Create stream\nI0125 22:35:51.286921    2965 log.go:172] (0xc000128e70) (0xc00090a6e0) Stream added, broadcasting: 1\nI0125 22:35:51.299585    2965 log.go:172] (0xc000128e70) Reply frame received for 1\nI0125 22:35:51.299801    2965 log.go:172] (0xc000128e70) (0xc00090a000) Create stream\nI0125 22:35:51.299836    2965 log.go:172] (0xc000128e70) (0xc00090a000) Stream added, broadcasting: 3\nI0125 22:35:51.302528    2965 log.go:172] (0xc000128e70) Reply frame received for 3\nI0125 22:35:51.302641    2965 log.go:172] (0xc000128e70) (0xc00090a0a0) Create stream\nI0125 22:35:51.302657    2965 log.go:172] (0xc000128e70) (0xc00090a0a0) Stream added, broadcasting: 5\nI0125 22:35:51.304343    2965 log.go:172] (0xc000128e70) Reply frame received for 5\nI0125 22:35:51.465463    2965 log.go:172] (0xc000128e70) Data frame received for 5\nI0125 22:35:51.465640    2965 log.go:172] (0xc00090a0a0) (5) Data frame handling\nI0125 22:35:51.465675    2965 log.go:172] (0xc00090a0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 22:35:51.610115    2965 log.go:172] (0xc000128e70) Data frame received for 3\nI0125 22:35:51.610226    2965 log.go:172] (0xc00090a000) (3) Data frame handling\nI0125 22:35:51.610294    2965 log.go:172] (0xc00090a000) (3) Data frame sent\nI0125 22:35:51.722966    2965 log.go:172] (0xc000128e70) (0xc00090a0a0) Stream removed, broadcasting: 5\nI0125 22:35:51.723203    2965 log.go:172] (0xc000128e70) Data frame received for 1\nI0125 22:35:51.723237    2965 log.go:172] (0xc00090a6e0) (1) Data frame handling\nI0125 22:35:51.723255    2965 log.go:172] (0xc00090a6e0) (1) Data frame sent\nI0125 22:35:51.723304    2965 log.go:172] (0xc000128e70) (0xc00090a6e0) Stream removed, broadcasting: 1\nI0125 22:35:51.723430    2965 log.go:172] (0xc000128e70) (0xc00090a000) Stream removed, broadcasting: 3\nI0125 22:35:51.723534    2965 log.go:172] (0xc000128e70) Go away received\nI0125 22:35:51.724539    2965 log.go:172] (0xc000128e70) (0xc00090a6e0) Stream removed, broadcasting: 1\nI0125 22:35:51.724631    2965 log.go:172] (0xc000128e70) (0xc00090a000) Stream removed, broadcasting: 3\nI0125 22:35:51.724696    2965 log.go:172] (0xc000128e70) (0xc00090a0a0) Stream removed, broadcasting: 5\n"
Jan 25 22:35:51.739: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 22:35:51.739: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

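The mv above is how the suite toggles pod health without killing anything: the webserver container is httpd (hence /usr/local/apache2/htdocs), and its readiness probe fetches a page from that docroot, so parking index.html in /tmp makes the probe fail while the container keeps running. A hedged manual equivalent (the probe wiring is inferred from the paths, not shown in this log):

# break readiness: the probe's GET starts returning 404
kubectl exec -n statefulset-8437 ss-0 -- /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
# restore readiness later
kubectl exec -n statefulset-8437 ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'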
Jan 25 22:35:51.747: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 25 22:36:01.758: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 22:36:01.758: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 22:36:01.787: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 25 22:36:01.787: INFO: ss-0  jerma-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:35:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:35:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:35:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:35:41 +0000 UTC  }]
Jan 25 22:36:01.787: INFO: 
Jan 25 22:36:01.787: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 25 22:36:03.445: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992351401s
Jan 25 22:36:04.714: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.334445896s
Jan 25 22:36:05.719: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.065636337s
Jan 25 22:36:06.728: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.059996789s
Jan 25 22:36:08.187: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.051422094s
Jan 25 22:36:09.196: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.59238694s
Jan 25 22:36:10.213: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.583478775s
Jan 25 22:36:11.223: INFO: Verifying statefulset ss doesn't scale past 3 for another 566.175443ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8437
Jan 25 22:36:12.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 22:36:12.681: INFO: stderr: "I0125 22:36:12.473504    2985 log.go:172] (0xc0000f5600) (0xc0000c41e0) Create stream\nI0125 22:36:12.474139    2985 log.go:172] (0xc0000f5600) (0xc0000c41e0) Stream added, broadcasting: 1\nI0125 22:36:12.479355    2985 log.go:172] (0xc0000f5600) Reply frame received for 1\nI0125 22:36:12.479540    2985 log.go:172] (0xc0000f5600) (0xc000116000) Create stream\nI0125 22:36:12.479564    2985 log.go:172] (0xc0000f5600) (0xc000116000) Stream added, broadcasting: 3\nI0125 22:36:12.481871    2985 log.go:172] (0xc0000f5600) Reply frame received for 3\nI0125 22:36:12.482054    2985 log.go:172] (0xc0000f5600) (0xc0000c4280) Create stream\nI0125 22:36:12.482078    2985 log.go:172] (0xc0000f5600) (0xc0000c4280) Stream added, broadcasting: 5\nI0125 22:36:12.485219    2985 log.go:172] (0xc0000f5600) Reply frame received for 5\nI0125 22:36:12.592270    2985 log.go:172] (0xc0000f5600) Data frame received for 3\nI0125 22:36:12.592404    2985 log.go:172] (0xc000116000) (3) Data frame handling\nI0125 22:36:12.592429    2985 log.go:172] (0xc000116000) (3) Data frame sent\nI0125 22:36:12.592477    2985 log.go:172] (0xc0000f5600) Data frame received for 5\nI0125 22:36:12.592505    2985 log.go:172] (0xc0000c4280) (5) Data frame handling\nI0125 22:36:12.592529    2985 log.go:172] (0xc0000c4280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0125 22:36:12.670153    2985 log.go:172] (0xc0000f5600) Data frame received for 1\nI0125 22:36:12.670266    2985 log.go:172] (0xc0000f5600) (0xc000116000) Stream removed, broadcasting: 3\nI0125 22:36:12.670350    2985 log.go:172] (0xc0000c41e0) (1) Data frame handling\nI0125 22:36:12.670374    2985 log.go:172] (0xc0000c41e0) (1) Data frame sent\nI0125 22:36:12.670382    2985 log.go:172] (0xc0000f5600) (0xc0000c41e0) Stream removed, broadcasting: 1\nI0125 22:36:12.670443    2985 log.go:172] (0xc0000f5600) (0xc0000c4280) Stream removed, broadcasting: 5\nI0125 22:36:12.670584    2985 log.go:172] (0xc0000f5600) Go away received\nI0125 22:36:12.672316    2985 log.go:172] (0xc0000f5600) (0xc0000c41e0) Stream removed, broadcasting: 1\nI0125 22:36:12.672328    2985 log.go:172] (0xc0000f5600) (0xc000116000) Stream removed, broadcasting: 3\nI0125 22:36:12.672333    2985 log.go:172] (0xc0000f5600) (0xc0000c4280) Stream removed, broadcasting: 5\n"
Jan 25 22:36:12.681: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 25 22:36:12.681: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 25 22:36:12.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 22:36:13.074: INFO: stderr: "I0125 22:36:12.841190    3009 log.go:172] (0xc0000f5340) (0xc00069fae0) Create stream\nI0125 22:36:12.841473    3009 log.go:172] (0xc0000f5340) (0xc00069fae0) Stream added, broadcasting: 1\nI0125 22:36:12.845563    3009 log.go:172] (0xc0000f5340) Reply frame received for 1\nI0125 22:36:12.845685    3009 log.go:172] (0xc0000f5340) (0xc000982000) Create stream\nI0125 22:36:12.845697    3009 log.go:172] (0xc0000f5340) (0xc000982000) Stream added, broadcasting: 3\nI0125 22:36:12.847144    3009 log.go:172] (0xc0000f5340) Reply frame received for 3\nI0125 22:36:12.847225    3009 log.go:172] (0xc0000f5340) (0xc000556000) Create stream\nI0125 22:36:12.847268    3009 log.go:172] (0xc0000f5340) (0xc000556000) Stream added, broadcasting: 5\nI0125 22:36:12.849104    3009 log.go:172] (0xc0000f5340) Reply frame received for 5\nI0125 22:36:12.973643    3009 log.go:172] (0xc0000f5340) Data frame received for 5\nI0125 22:36:12.973720    3009 log.go:172] (0xc000556000) (5) Data frame handling\nI0125 22:36:12.973747    3009 log.go:172] (0xc000556000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0125 22:36:12.978464    3009 log.go:172] (0xc0000f5340) Data frame received for 5\nI0125 22:36:12.978488    3009 log.go:172] (0xc000556000) (5) Data frame handling\nI0125 22:36:12.978502    3009 log.go:172] (0xc000556000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0125 22:36:12.978643    3009 log.go:172] (0xc0000f5340) Data frame received for 3\nI0125 22:36:12.978686    3009 log.go:172] (0xc000982000) (3) Data frame handling\nI0125 22:36:12.978707    3009 log.go:172] (0xc000982000) (3) Data frame sent\nI0125 22:36:13.067691    3009 log.go:172] (0xc0000f5340) Data frame received for 1\nI0125 22:36:13.067978    3009 log.go:172] (0xc0000f5340) (0xc000556000) Stream removed, broadcasting: 5\nI0125 22:36:13.068124    3009 log.go:172] (0xc00069fae0) (1) Data frame handling\nI0125 22:36:13.068162    3009 log.go:172] (0xc00069fae0) (1) Data frame sent\nI0125 22:36:13.068239    3009 log.go:172] (0xc0000f5340) (0xc000982000) Stream removed, broadcasting: 3\nI0125 22:36:13.068278    3009 log.go:172] (0xc0000f5340) (0xc00069fae0) Stream removed, broadcasting: 1\nI0125 22:36:13.068692    3009 log.go:172] (0xc0000f5340) Go away received\nI0125 22:36:13.069375    3009 log.go:172] (0xc0000f5340) (0xc00069fae0) Stream removed, broadcasting: 1\nI0125 22:36:13.069396    3009 log.go:172] (0xc0000f5340) (0xc000982000) Stream removed, broadcasting: 3\nI0125 22:36:13.069407    3009 log.go:172] (0xc0000f5340) (0xc000556000) Stream removed, broadcasting: 5\n"
Jan 25 22:36:13.074: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 25 22:36:13.075: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 25 22:36:13.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 22:36:13.421: INFO: stderr: "I0125 22:36:13.258215    3029 log.go:172] (0xc000aa5550) (0xc000a5a8c0) Create stream\nI0125 22:36:13.258459    3029 log.go:172] (0xc000aa5550) (0xc000a5a8c0) Stream added, broadcasting: 1\nI0125 22:36:13.275814    3029 log.go:172] (0xc000aa5550) Reply frame received for 1\nI0125 22:36:13.275897    3029 log.go:172] (0xc000aa5550) (0xc000648640) Create stream\nI0125 22:36:13.275921    3029 log.go:172] (0xc000aa5550) (0xc000648640) Stream added, broadcasting: 3\nI0125 22:36:13.282747    3029 log.go:172] (0xc000aa5550) Reply frame received for 3\nI0125 22:36:13.282847    3029 log.go:172] (0xc000aa5550) (0xc0005cb400) Create stream\nI0125 22:36:13.282861    3029 log.go:172] (0xc000aa5550) (0xc0005cb400) Stream added, broadcasting: 5\nI0125 22:36:13.284276    3029 log.go:172] (0xc000aa5550) Reply frame received for 5\nI0125 22:36:13.345247    3029 log.go:172] (0xc000aa5550) Data frame received for 5\nI0125 22:36:13.345389    3029 log.go:172] (0xc0005cb400) (5) Data frame handling\nI0125 22:36:13.345425    3029 log.go:172] (0xc0005cb400) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0125 22:36:13.345486    3029 log.go:172] (0xc000aa5550) Data frame received for 3\nI0125 22:36:13.345521    3029 log.go:172] (0xc000648640) (3) Data frame handling\nI0125 22:36:13.345544    3029 log.go:172] (0xc000648640) (3) Data frame sent\nI0125 22:36:13.410137    3029 log.go:172] (0xc000aa5550) (0xc000648640) Stream removed, broadcasting: 3\nI0125 22:36:13.410378    3029 log.go:172] (0xc000aa5550) Data frame received for 1\nI0125 22:36:13.410596    3029 log.go:172] (0xc000aa5550) (0xc0005cb400) Stream removed, broadcasting: 5\nI0125 22:36:13.410679    3029 log.go:172] (0xc000a5a8c0) (1) Data frame handling\nI0125 22:36:13.410751    3029 log.go:172] (0xc000a5a8c0) (1) Data frame sent\nI0125 22:36:13.410769    3029 log.go:172] (0xc000aa5550) (0xc000a5a8c0) Stream removed, broadcasting: 1\nI0125 22:36:13.410787    3029 log.go:172] (0xc000aa5550) Go away received\nI0125 22:36:13.411491    3029 log.go:172] (0xc000aa5550) (0xc000a5a8c0) Stream removed, broadcasting: 1\nI0125 22:36:13.411516    3029 log.go:172] (0xc000aa5550) (0xc000648640) Stream removed, broadcasting: 3\nI0125 22:36:13.411530    3029 log.go:172] (0xc000aa5550) (0xc0005cb400) Stream removed, broadcasting: 5\n"
Jan 25 22:36:13.421: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 25 22:36:13.421: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 25 22:36:13.428: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 22:36:13.428: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 22:36:13.428: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
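That ss-1 and ss-2 came up while ss-0 was still unready is the "burst" behavior under test: with podManagementPolicy: Parallel, the StatefulSet controller does not wait for each ordinal to become Ready before creating the next. A sketch of the kind of spec involved, assuming details (image tag, labels, probe) that this log does not spell out:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
  namespace: statefulset-8437
spec:
  podManagementPolicy: Parallel   # create/delete pods in parallel instead of ordinally
  serviceName: test               # the headless service created earlier in this test
  replicas: 3
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: httpd:2.4.38-alpine
        readinessProbe:
          httpGet:
            path: /index.html
            port: 80
EOF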
STEP: Scale down will not halt with unhealthy stateful pod
Jan 25 22:36:13.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 22:36:13.777: INFO: stderr: "I0125 22:36:13.577400    3049 log.go:172] (0xc00090d6b0) (0xc000a06780) Create stream\nI0125 22:36:13.577669    3049 log.go:172] (0xc00090d6b0) (0xc000a06780) Stream added, broadcasting: 1\nI0125 22:36:13.585659    3049 log.go:172] (0xc00090d6b0) Reply frame received for 1\nI0125 22:36:13.585832    3049 log.go:172] (0xc00090d6b0) (0xc00062a640) Create stream\nI0125 22:36:13.585909    3049 log.go:172] (0xc00090d6b0) (0xc00062a640) Stream added, broadcasting: 3\nI0125 22:36:13.589821    3049 log.go:172] (0xc00090d6b0) Reply frame received for 3\nI0125 22:36:13.589867    3049 log.go:172] (0xc00090d6b0) (0xc00041b400) Create stream\nI0125 22:36:13.589889    3049 log.go:172] (0xc00090d6b0) (0xc00041b400) Stream added, broadcasting: 5\nI0125 22:36:13.591581    3049 log.go:172] (0xc00090d6b0) Reply frame received for 5\nI0125 22:36:13.677336    3049 log.go:172] (0xc00090d6b0) Data frame received for 5\nI0125 22:36:13.677422    3049 log.go:172] (0xc00041b400) (5) Data frame handling\nI0125 22:36:13.677445    3049 log.go:172] (0xc00041b400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 22:36:13.678727    3049 log.go:172] (0xc00090d6b0) Data frame received for 3\nI0125 22:36:13.678826    3049 log.go:172] (0xc00062a640) (3) Data frame handling\nI0125 22:36:13.678855    3049 log.go:172] (0xc00062a640) (3) Data frame sent\nI0125 22:36:13.760752    3049 log.go:172] (0xc00090d6b0) (0xc00062a640) Stream removed, broadcasting: 3\nI0125 22:36:13.761418    3049 log.go:172] (0xc00090d6b0) Data frame received for 1\nI0125 22:36:13.761471    3049 log.go:172] (0xc000a06780) (1) Data frame handling\nI0125 22:36:13.761510    3049 log.go:172] (0xc00090d6b0) (0xc00041b400) Stream removed, broadcasting: 5\nI0125 22:36:13.761560    3049 log.go:172] (0xc000a06780) (1) Data frame sent\nI0125 22:36:13.761584    3049 log.go:172] (0xc00090d6b0) (0xc000a06780) Stream removed, broadcasting: 1\nI0125 22:36:13.762858    3049 log.go:172] (0xc00090d6b0) (0xc000a06780) Stream removed, broadcasting: 1\nI0125 22:36:13.762886    3049 log.go:172] (0xc00090d6b0) (0xc00062a640) Stream removed, broadcasting: 3\nI0125 22:36:13.762893    3049 log.go:172] (0xc00090d6b0) (0xc00041b400) Stream removed, broadcasting: 5\n"
Jan 25 22:36:13.778: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 22:36:13.778: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 25 22:36:13.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 22:36:14.275: INFO: stderr: "I0125 22:36:14.099623    3069 log.go:172] (0xc00052e000) (0xc00096a000) Create stream\nI0125 22:36:14.099988    3069 log.go:172] (0xc00052e000) (0xc00096a000) Stream added, broadcasting: 1\nI0125 22:36:14.104739    3069 log.go:172] (0xc00052e000) Reply frame received for 1\nI0125 22:36:14.104784    3069 log.go:172] (0xc00052e000) (0xc00096a0a0) Create stream\nI0125 22:36:14.104793    3069 log.go:172] (0xc00052e000) (0xc00096a0a0) Stream added, broadcasting: 3\nI0125 22:36:14.105626    3069 log.go:172] (0xc00052e000) Reply frame received for 3\nI0125 22:36:14.105648    3069 log.go:172] (0xc00052e000) (0xc000777680) Create stream\nI0125 22:36:14.105655    3069 log.go:172] (0xc00052e000) (0xc000777680) Stream added, broadcasting: 5\nI0125 22:36:14.107410    3069 log.go:172] (0xc00052e000) Reply frame received for 5\nI0125 22:36:14.178366    3069 log.go:172] (0xc00052e000) Data frame received for 5\nI0125 22:36:14.178440    3069 log.go:172] (0xc000777680) (5) Data frame handling\nI0125 22:36:14.178470    3069 log.go:172] (0xc000777680) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 22:36:14.196616    3069 log.go:172] (0xc00052e000) Data frame received for 3\nI0125 22:36:14.196656    3069 log.go:172] (0xc00096a0a0) (3) Data frame handling\nI0125 22:36:14.196672    3069 log.go:172] (0xc00096a0a0) (3) Data frame sent\nI0125 22:36:14.257046    3069 log.go:172] (0xc00052e000) Data frame received for 1\nI0125 22:36:14.257547    3069 log.go:172] (0xc00096a000) (1) Data frame handling\nI0125 22:36:14.257612    3069 log.go:172] (0xc00096a000) (1) Data frame sent\nI0125 22:36:14.259265    3069 log.go:172] (0xc00052e000) (0xc00096a000) Stream removed, broadcasting: 1\nI0125 22:36:14.259623    3069 log.go:172] (0xc00052e000) (0xc00096a0a0) Stream removed, broadcasting: 3\nI0125 22:36:14.259701    3069 log.go:172] (0xc00052e000) (0xc000777680) Stream removed, broadcasting: 5\nI0125 22:36:14.259752    3069 log.go:172] (0xc00052e000) Go away received\nI0125 22:36:14.260411    3069 log.go:172] (0xc00052e000) (0xc00096a000) Stream removed, broadcasting: 1\nI0125 22:36:14.260429    3069 log.go:172] (0xc00052e000) (0xc00096a0a0) Stream removed, broadcasting: 3\nI0125 22:36:14.260443    3069 log.go:172] (0xc00052e000) (0xc000777680) Stream removed, broadcasting: 5\n"
Jan 25 22:36:14.275: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 22:36:14.275: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 25 22:36:14.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 22:36:14.724: INFO: stderr: "I0125 22:36:14.499339    3090 log.go:172] (0xc000a1b130) (0xc000a423c0) Create stream\nI0125 22:36:14.499814    3090 log.go:172] (0xc000a1b130) (0xc000a423c0) Stream added, broadcasting: 1\nI0125 22:36:14.514599    3090 log.go:172] (0xc000a1b130) Reply frame received for 1\nI0125 22:36:14.514768    3090 log.go:172] (0xc000a1b130) (0xc0005c05a0) Create stream\nI0125 22:36:14.514782    3090 log.go:172] (0xc000a1b130) (0xc0005c05a0) Stream added, broadcasting: 3\nI0125 22:36:14.518470    3090 log.go:172] (0xc000a1b130) Reply frame received for 3\nI0125 22:36:14.518650    3090 log.go:172] (0xc000a1b130) (0xc00030f360) Create stream\nI0125 22:36:14.518689    3090 log.go:172] (0xc000a1b130) (0xc00030f360) Stream added, broadcasting: 5\nI0125 22:36:14.521044    3090 log.go:172] (0xc000a1b130) Reply frame received for 5\nI0125 22:36:14.603674    3090 log.go:172] (0xc000a1b130) Data frame received for 5\nI0125 22:36:14.603755    3090 log.go:172] (0xc00030f360) (5) Data frame handling\nI0125 22:36:14.603776    3090 log.go:172] (0xc00030f360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 22:36:14.626596    3090 log.go:172] (0xc000a1b130) Data frame received for 3\nI0125 22:36:14.626631    3090 log.go:172] (0xc0005c05a0) (3) Data frame handling\nI0125 22:36:14.626651    3090 log.go:172] (0xc0005c05a0) (3) Data frame sent\nI0125 22:36:14.712195    3090 log.go:172] (0xc000a1b130) (0xc0005c05a0) Stream removed, broadcasting: 3\nI0125 22:36:14.712482    3090 log.go:172] (0xc000a1b130) Data frame received for 1\nI0125 22:36:14.712545    3090 log.go:172] (0xc000a423c0) (1) Data frame handling\nI0125 22:36:14.712623    3090 log.go:172] (0xc000a1b130) (0xc00030f360) Stream removed, broadcasting: 5\nI0125 22:36:14.712763    3090 log.go:172] (0xc000a423c0) (1) Data frame sent\nI0125 22:36:14.712793    3090 log.go:172] (0xc000a1b130) (0xc000a423c0) Stream removed, broadcasting: 1\nI0125 22:36:14.712830    3090 log.go:172] (0xc000a1b130) Go away received\nI0125 22:36:14.714705    3090 log.go:172] (0xc000a1b130) (0xc000a423c0) Stream removed, broadcasting: 1\nI0125 22:36:14.714824    3090 log.go:172] (0xc000a1b130) (0xc0005c05a0) Stream removed, broadcasting: 3\nI0125 22:36:14.714850    3090 log.go:172] (0xc000a1b130) (0xc00030f360) Stream removed, broadcasting: 5\n"
Jan 25 22:36:14.724: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 22:36:14.724: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 25 22:36:14.724: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 22:36:14.731: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 25 22:36:24.742: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 22:36:24.742: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 22:36:24.742: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 22:36:24.764: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 22:36:24.764: INFO: ss-0  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:35:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:35:41 +0000 UTC  }]
Jan 25 22:36:24.765: INFO: ss-1  jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  }]
Jan 25 22:36:24.765: INFO: ss-2  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  }]
Jan 25 22:36:24.765: INFO: 
Jan 25 22:36:24.765: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 25 22:36:26.233: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 22:36:26.233: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:35:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:35:41 +0000 UTC  }]
Jan 25 22:36:26.234: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  }]
Jan 25 22:36:26.234: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  }]
Jan 25 22:36:26.234: INFO: 
Jan 25 22:36:26.234: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 25 22:36:27.245: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 22:36:27.245: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:35:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:35:41 +0000 UTC  }]
Jan 25 22:36:27.245: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  }]
Jan 25 22:36:27.245: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  }]
Jan 25 22:36:27.245: INFO: 
Jan 25 22:36:27.245: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 25 22:36:28.625: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 22:36:28.625: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:35:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:35:41 +0000 UTC  }]
Jan 25 22:36:28.625: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  }]
Jan 25 22:36:28.625: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  }]
Jan 25 22:36:28.625: INFO: 
Jan 25 22:36:28.625: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 25 22:36:29.631: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 22:36:29.631: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:35:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:35:41 +0000 UTC  }]
Jan 25 22:36:29.631: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  }]
Jan 25 22:36:29.631: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  }]
Jan 25 22:36:29.632: INFO: 
Jan 25 22:36:29.632: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 25 22:36:30.639: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 22:36:30.639: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:35:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:35:41 +0000 UTC  }]
Jan 25 22:36:30.639: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  }]
Jan 25 22:36:30.639: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  }]
Jan 25 22:36:30.639: INFO: 
Jan 25 22:36:30.640: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 25 22:36:31.647: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 25 22:36:31.647: INFO: ss-0  jerma-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:35:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:35:41 +0000 UTC  }]
Jan 25 22:36:31.647: INFO: ss-2  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  }]
Jan 25 22:36:31.647: INFO: 
Jan 25 22:36:31.647: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 25 22:36:32.658: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 25 22:36:32.671: INFO: ss-0  jerma-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:35:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:35:41 +0000 UTC  }]
Jan 25 22:36:32.672: INFO: ss-2  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  }]
Jan 25 22:36:32.672: INFO: 
Jan 25 22:36:32.672: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 25 22:36:33.681: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 25 22:36:33.681: INFO: ss-0  jerma-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:35:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:35:41 +0000 UTC  }]
Jan 25 22:36:33.681: INFO: ss-2  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  }]
Jan 25 22:36:33.681: INFO: 
Jan 25 22:36:33.681: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 25 22:36:34.691: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 25 22:36:34.691: INFO: ss-0  jerma-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:35:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:35:41 +0000 UTC  }]
Jan 25 22:36:34.691: INFO: ss-2  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 22:36:01 +0000 UTC  }]
Jan 25 22:36:34.691: INFO: 
Jan 25 22:36:34.691: INFO: StatefulSet ss has not reached scale 0, at 2
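Every remaining replica is unready, yet the scale-down below still proceeds; under Parallel pod management, deletion does not wait on readiness either. A hedged manual equivalent of what the framework does next:

kubectl scale statefulset ss --replicas=0 -n statefulset-8437
kubectl get pods -n statefulset-8437 -w   # watch ss-0/ss-1/ss-2 terminate in a burst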
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-8437
Jan 25 22:36:35.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 22:36:35.981: INFO: rc: 1
Jan 25 22:36:35.982: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
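The framework's RunHostCmd helper retries on a fixed 10s cadence, and the failure mode shifts as the scale-down progresses: first "container not found" while ss-0 is terminating, then "pods ss-0 not found" once it has been deleted. Roughly equivalent shell, as an illustrative sketch rather than the framework's actual code:

# retry the exec every 10s until it stops failing (or the pod is gone for good)
until kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'; do
  sleep 10
done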
Jan 25 22:36:45.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 22:36:46.119: INFO: rc: 1
Jan 25 22:36:46.119: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 25 22:36:56.120 - 22:40:10.013: INFO: (the same RunHostCmd retry repeated on the 10s cadence 20 more times, each attempt returning rc: 1 with stderr: Error from server (NotFound): pods "ss-0" not found)
Jan 25 22:40:20.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 22:40:20.236: INFO: rc: 1
Jan 25 22:40:20.236: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 25 22:40:30.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 22:40:30.434: INFO: rc: 1
Jan 25 22:40:30.435: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 25 22:40:40.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 22:40:40.751: INFO: rc: 1
Jan 25 22:40:40.751: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 25 22:40:50.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 22:40:50.969: INFO: rc: 1
Jan 25 22:40:50.969: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 25 22:41:00.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 22:41:01.162: INFO: rc: 1
Jan 25 22:41:01.163: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 25 22:41:11.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 22:41:11.322: INFO: rc: 1
Jan 25 22:41:11.322: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 25 22:41:21.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 22:41:21.516: INFO: rc: 1
Jan 25 22:41:21.516: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 25 22:41:31.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 22:41:31.732: INFO: rc: 1
Jan 25 22:41:31.733: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 25 22:41:41.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8437 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 22:41:41.953: INFO: rc: 1
Jan 25 22:41:41.954: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
Jan 25 22:41:41.954: INFO: Scaling statefulset ss to 0
Jan 25 22:41:41.968: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 25 22:41:41.971: INFO: Deleting all statefulset in ns statefulset-8437
Jan 25 22:41:41.977: INFO: Scaling statefulset ss to 0
Jan 25 22:41:41.991: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 22:41:41.994: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:41:42.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8437" for this suite.

• [SLOW TEST:361.129 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":212,"skipped":3615,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:41:42.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jan 25 22:41:42.222: INFO: Created pod &Pod{ObjectMeta:{dns-3125  dns-3125 /api/v1/namespaces/dns-3125/pods/dns-3125 9e4be164-b037-4afb-8599-ade954748ea7 4344965 0 2020-01-25 22:41:42 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rn4hf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rn4hf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rn4hf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Jan 25 22:41:50.286: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-3125 PodName:dns-3125 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 22:41:50.286: INFO: >>> kubeConfig: /root/.kube/config
I0125 22:41:50.346969       8 log.go:172] (0xc00056a580) (0xc0015726e0) Create stream
I0125 22:41:50.347130       8 log.go:172] (0xc00056a580) (0xc0015726e0) Stream added, broadcasting: 1
I0125 22:41:50.351533       8 log.go:172] (0xc00056a580) Reply frame received for 1
I0125 22:41:50.351603       8 log.go:172] (0xc00056a580) (0xc001f7c3c0) Create stream
I0125 22:41:50.351625       8 log.go:172] (0xc00056a580) (0xc001f7c3c0) Stream added, broadcasting: 3
I0125 22:41:50.353584       8 log.go:172] (0xc00056a580) Reply frame received for 3
I0125 22:41:50.353629       8 log.go:172] (0xc00056a580) (0xc00161e000) Create stream
I0125 22:41:50.353650       8 log.go:172] (0xc00056a580) (0xc00161e000) Stream added, broadcasting: 5
I0125 22:41:50.355595       8 log.go:172] (0xc00056a580) Reply frame received for 5
I0125 22:41:50.443957       8 log.go:172] (0xc00056a580) Data frame received for 3
I0125 22:41:50.444094       8 log.go:172] (0xc001f7c3c0) (3) Data frame handling
I0125 22:41:50.444144       8 log.go:172] (0xc001f7c3c0) (3) Data frame sent
I0125 22:41:50.576128       8 log.go:172] (0xc00056a580) Data frame received for 1
I0125 22:41:50.576311       8 log.go:172] (0xc00056a580) (0xc001f7c3c0) Stream removed, broadcasting: 3
I0125 22:41:50.576520       8 log.go:172] (0xc0015726e0) (1) Data frame handling
I0125 22:41:50.576589       8 log.go:172] (0xc0015726e0) (1) Data frame sent
I0125 22:41:50.576655       8 log.go:172] (0xc00056a580) (0xc00161e000) Stream removed, broadcasting: 5
I0125 22:41:50.576696       8 log.go:172] (0xc00056a580) (0xc0015726e0) Stream removed, broadcasting: 1
I0125 22:41:50.576739       8 log.go:172] (0xc00056a580) Go away received
I0125 22:41:50.577200       8 log.go:172] (0xc00056a580) (0xc0015726e0) Stream removed, broadcasting: 1
I0125 22:41:50.577241       8 log.go:172] (0xc00056a580) (0xc001f7c3c0) Stream removed, broadcasting: 3
I0125 22:41:50.577251       8 log.go:172] (0xc00056a580) (0xc00161e000) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Jan 25 22:41:50.577: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-3125 PodName:dns-3125 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 22:41:50.577: INFO: >>> kubeConfig: /root/.kube/config
I0125 22:41:50.631759       8 log.go:172] (0xc00293dc30) (0xc00167c5a0) Create stream
I0125 22:41:50.631965       8 log.go:172] (0xc00293dc30) (0xc00167c5a0) Stream added, broadcasting: 1
I0125 22:41:50.638487       8 log.go:172] (0xc00293dc30) Reply frame received for 1
I0125 22:41:50.638971       8 log.go:172] (0xc00293dc30) (0xc001778140) Create stream
I0125 22:41:50.639015       8 log.go:172] (0xc00293dc30) (0xc001778140) Stream added, broadcasting: 3
I0125 22:41:50.641005       8 log.go:172] (0xc00293dc30) Reply frame received for 3
I0125 22:41:50.641130       8 log.go:172] (0xc00293dc30) (0xc00167c6e0) Create stream
I0125 22:41:50.641152       8 log.go:172] (0xc00293dc30) (0xc00167c6e0) Stream added, broadcasting: 5
I0125 22:41:50.643300       8 log.go:172] (0xc00293dc30) Reply frame received for 5
I0125 22:41:50.730149       8 log.go:172] (0xc00293dc30) Data frame received for 3
I0125 22:41:50.730303       8 log.go:172] (0xc001778140) (3) Data frame handling
I0125 22:41:50.730331       8 log.go:172] (0xc001778140) (3) Data frame sent
I0125 22:41:50.795914       8 log.go:172] (0xc00293dc30) Data frame received for 1
I0125 22:41:50.795983       8 log.go:172] (0xc00167c5a0) (1) Data frame handling
I0125 22:41:50.796001       8 log.go:172] (0xc00167c5a0) (1) Data frame sent
I0125 22:41:50.796677       8 log.go:172] (0xc00293dc30) (0xc00167c6e0) Stream removed, broadcasting: 5
I0125 22:41:50.796789       8 log.go:172] (0xc00293dc30) (0xc00167c5a0) Stream removed, broadcasting: 1
I0125 22:41:50.797030       8 log.go:172] (0xc00293dc30) (0xc001778140) Stream removed, broadcasting: 3
I0125 22:41:50.797062       8 log.go:172] (0xc00293dc30) (0xc00167c5a0) Stream removed, broadcasting: 1
I0125 22:41:50.797070       8 log.go:172] (0xc00293dc30) (0xc001778140) Stream removed, broadcasting: 3
I0125 22:41:50.797076       8 log.go:172] (0xc00293dc30) (0xc00167c6e0) Stream removed, broadcasting: 5
Jan 25 22:41:50.797: INFO: Deleting pod dns-3125...
I0125 22:41:50.798013       8 log.go:172] (0xc00293dc30) Go away received
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:41:50.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3125" for this suite.

• [SLOW TEST:8.867 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":213,"skipped":3630,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:41:50.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 25 22:41:51.025: INFO: Waiting up to 5m0s for pod "downwardapi-volume-176603ad-d13d-4430-831d-aa9bcb983c00" in namespace "projected-3256" to be "success or failure"
Jan 25 22:41:51.029: INFO: Pod "downwardapi-volume-176603ad-d13d-4430-831d-aa9bcb983c00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.005617ms
Jan 25 22:41:53.035: INFO: Pod "downwardapi-volume-176603ad-d13d-4430-831d-aa9bcb983c00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00996975s
Jan 25 22:41:55.040: INFO: Pod "downwardapi-volume-176603ad-d13d-4430-831d-aa9bcb983c00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014941647s
Jan 25 22:41:57.048: INFO: Pod "downwardapi-volume-176603ad-d13d-4430-831d-aa9bcb983c00": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022490709s
Jan 25 22:41:59.084: INFO: Pod "downwardapi-volume-176603ad-d13d-4430-831d-aa9bcb983c00": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058735248s
Jan 25 22:42:01.092: INFO: Pod "downwardapi-volume-176603ad-d13d-4430-831d-aa9bcb983c00": Phase="Pending", Reason="", readiness=false. Elapsed: 10.066376631s
Jan 25 22:42:03.101: INFO: Pod "downwardapi-volume-176603ad-d13d-4430-831d-aa9bcb983c00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.075861499s
STEP: Saw pod success
Jan 25 22:42:03.101: INFO: Pod "downwardapi-volume-176603ad-d13d-4430-831d-aa9bcb983c00" satisfied condition "success or failure"
Jan 25 22:42:03.111: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-176603ad-d13d-4430-831d-aa9bcb983c00 container client-container: 
STEP: delete the pod
Jan 25 22:42:03.272: INFO: Waiting for pod downwardapi-volume-176603ad-d13d-4430-831d-aa9bcb983c00 to disappear
Jan 25 22:42:03.283: INFO: Pod downwardapi-volume-176603ad-d13d-4430-831d-aa9bcb983c00 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:42:03.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3256" for this suite.

• [SLOW TEST:12.387 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3645,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:42:03.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:42:19.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6628" for this suite.

• [SLOW TEST:16.581 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":215,"skipped":3651,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:42:19.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 25 22:42:20.025: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f19cd29-810a-4d81-8bff-e35e4074be43" in namespace "projected-2910" to be "success or failure"
Jan 25 22:42:20.030: INFO: Pod "downwardapi-volume-8f19cd29-810a-4d81-8bff-e35e4074be43": Phase="Pending", Reason="", readiness=false. Elapsed: 4.354443ms
Jan 25 22:42:22.038: INFO: Pod "downwardapi-volume-8f19cd29-810a-4d81-8bff-e35e4074be43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012688893s
Jan 25 22:42:24.046: INFO: Pod "downwardapi-volume-8f19cd29-810a-4d81-8bff-e35e4074be43": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020458663s
Jan 25 22:42:26.054: INFO: Pod "downwardapi-volume-8f19cd29-810a-4d81-8bff-e35e4074be43": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028424483s
Jan 25 22:42:28.061: INFO: Pod "downwardapi-volume-8f19cd29-810a-4d81-8bff-e35e4074be43": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035848484s
Jan 25 22:42:30.078: INFO: Pod "downwardapi-volume-8f19cd29-810a-4d81-8bff-e35e4074be43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.052996478s
STEP: Saw pod success
Jan 25 22:42:30.079: INFO: Pod "downwardapi-volume-8f19cd29-810a-4d81-8bff-e35e4074be43" satisfied condition "success or failure"
Jan 25 22:42:30.084: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-8f19cd29-810a-4d81-8bff-e35e4074be43 container client-container: 
STEP: delete the pod
Jan 25 22:42:30.621: INFO: Waiting for pod downwardapi-volume-8f19cd29-810a-4d81-8bff-e35e4074be43 to disappear
Jan 25 22:42:30.640: INFO: Pod downwardapi-volume-8f19cd29-810a-4d81-8bff-e35e4074be43 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:42:30.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2910" for this suite.

• [SLOW TEST:10.770 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3662,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:42:30.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan 25 22:42:30.820: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 25 22:42:30.836: INFO: Waiting for terminating namespaces to be deleted...
Jan 25 22:42:30.839: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 25 22:42:30.854: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Jan 25 22:42:30.854: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 22:42:30.854: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 25 22:42:30.855: INFO: 	Container weave ready: true, restart count 1
Jan 25 22:42:30.855: INFO: 	Container weave-npc ready: true, restart count 0
Jan 25 22:42:30.855: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 25 22:42:30.884: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 25 22:42:30.884: INFO: 	Container etcd ready: true, restart count 1
Jan 25 22:42:30.884: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 25 22:42:30.884: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 25 22:42:30.884: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 25 22:42:30.884: INFO: 	Container coredns ready: true, restart count 0
Jan 25 22:42:30.884: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 25 22:42:30.884: INFO: 	Container coredns ready: true, restart count 0
Jan 25 22:42:30.884: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Jan 25 22:42:30.884: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 22:42:30.884: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 25 22:42:30.884: INFO: 	Container weave ready: true, restart count 0
Jan 25 22:42:30.884: INFO: 	Container weave-npc ready: true, restart count 0
Jan 25 22:42:30.884: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 25 22:42:30.884: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 25 22:42:30.884: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 25 22:42:30.884: INFO: 	Container kube-scheduler ready: true, restart count 4
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-f5246280-6309-48f9-9dae-d1e294504f90 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-f5246280-6309-48f9-9dae-d1e294504f90 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-f5246280-6309-48f9-9dae-d1e294504f90
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:43:07.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9253" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:36.630 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":217,"skipped":3687,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:43:07.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Jan 25 22:43:07.383: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6474" to be "success or failure"
Jan 25 22:43:07.388: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 5.267908ms
Jan 25 22:43:09.398: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015805037s
Jan 25 22:43:11.410: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027445927s
Jan 25 22:43:13.416: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033125178s
Jan 25 22:43:15.474: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.091511064s
Jan 25 22:43:17.481: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.098638139s
Jan 25 22:43:19.491: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.108479381s
Jan 25 22:43:21.497: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.114807684s
STEP: Saw pod success
Jan 25 22:43:21.498: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 25 22:43:21.501: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 25 22:43:21.651: INFO: Waiting for pod pod-host-path-test to disappear
Jan 25 22:43:21.664: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:43:21.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6474" for this suite.

• [SLOW TEST:14.563 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3703,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:43:21.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 25 22:43:21.992: INFO: Waiting up to 5m0s for pod "downwardapi-volume-70ed4121-1dbc-4e19-905f-61acea201c5a" in namespace "downward-api-2010" to be "success or failure"
Jan 25 22:43:22.000: INFO: Pod "downwardapi-volume-70ed4121-1dbc-4e19-905f-61acea201c5a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.782761ms
Jan 25 22:43:24.009: INFO: Pod "downwardapi-volume-70ed4121-1dbc-4e19-905f-61acea201c5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016815159s
Jan 25 22:43:26.015: INFO: Pod "downwardapi-volume-70ed4121-1dbc-4e19-905f-61acea201c5a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022475773s
Jan 25 22:43:28.021: INFO: Pod "downwardapi-volume-70ed4121-1dbc-4e19-905f-61acea201c5a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028621092s
Jan 25 22:43:30.028: INFO: Pod "downwardapi-volume-70ed4121-1dbc-4e19-905f-61acea201c5a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035753944s
Jan 25 22:43:32.053: INFO: Pod "downwardapi-volume-70ed4121-1dbc-4e19-905f-61acea201c5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060440712s
STEP: Saw pod success
Jan 25 22:43:32.062: INFO: Pod "downwardapi-volume-70ed4121-1dbc-4e19-905f-61acea201c5a" satisfied condition "success or failure"
Jan 25 22:43:32.075: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-70ed4121-1dbc-4e19-905f-61acea201c5a container client-container: 
STEP: delete the pod
Jan 25 22:43:32.259: INFO: Waiting for pod downwardapi-volume-70ed4121-1dbc-4e19-905f-61acea201c5a to disappear
Jan 25 22:43:32.269: INFO: Pod downwardapi-volume-70ed4121-1dbc-4e19-905f-61acea201c5a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:43:32.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2010" for this suite.

• [SLOW TEST:10.431 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3777,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:43:32.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-9d42e9c7-89c8-426a-b955-b07310cff507
STEP: Creating a pod to test consume secrets
Jan 25 22:43:32.441: INFO: Waiting up to 5m0s for pod "pod-secrets-3a947a28-13a7-4886-a724-3927ca624965" in namespace "secrets-596" to be "success or failure"
Jan 25 22:43:32.491: INFO: Pod "pod-secrets-3a947a28-13a7-4886-a724-3927ca624965": Phase="Pending", Reason="", readiness=false. Elapsed: 50.077702ms
Jan 25 22:43:34.501: INFO: Pod "pod-secrets-3a947a28-13a7-4886-a724-3927ca624965": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059425977s
Jan 25 22:43:36.513: INFO: Pod "pod-secrets-3a947a28-13a7-4886-a724-3927ca624965": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072169772s
Jan 25 22:43:38.526: INFO: Pod "pod-secrets-3a947a28-13a7-4886-a724-3927ca624965": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084497006s
Jan 25 22:43:40.533: INFO: Pod "pod-secrets-3a947a28-13a7-4886-a724-3927ca624965": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.091535463s
STEP: Saw pod success
Jan 25 22:43:40.533: INFO: Pod "pod-secrets-3a947a28-13a7-4886-a724-3927ca624965" satisfied condition "success or failure"
Jan 25 22:43:40.537: INFO: Trying to get logs from node jerma-node pod pod-secrets-3a947a28-13a7-4886-a724-3927ca624965 container secret-env-test: 
STEP: delete the pod
Jan 25 22:43:40.592: INFO: Waiting for pod pod-secrets-3a947a28-13a7-4886-a724-3927ca624965 to disappear
Jan 25 22:43:40.670: INFO: Pod pod-secrets-3a947a28-13a7-4886-a724-3927ca624965 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:43:40.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-596" for this suite.

• [SLOW TEST:8.400 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3781,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:43:40.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-4429
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Jan 25 22:43:40.976: INFO: Found 0 stateful pods, waiting for 3
Jan 25 22:43:50.981: INFO: Found 2 stateful pods, waiting for 3
Jan 25 22:44:00.986: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 22:44:00.986: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 22:44:00.986: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 25 22:44:10.985: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 22:44:10.985: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 22:44:10.985: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jan 25 22:44:11.018: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 25 22:44:21.107: INFO: Updating stateful set ss2
Jan 25 22:44:21.117: INFO: Waiting for Pod statefulset-4429/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 25 22:44:31.131: INFO: Waiting for Pod statefulset-4429/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Jan 25 22:44:41.395: INFO: Found 2 stateful pods, waiting for 3
Jan 25 22:44:51.406: INFO: Found 2 stateful pods, waiting for 3
Jan 25 22:45:01.406: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 22:45:01.406: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 22:45:01.406: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Jan 25 22:45:11.405: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 22:45:11.405: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 22:45:11.405: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 25 22:45:11.438: INFO: Updating stateful set ss2
Jan 25 22:45:11.510: INFO: Waiting for Pod statefulset-4429/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 25 22:45:21.524: INFO: Waiting for Pod statefulset-4429/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 25 22:45:31.552: INFO: Updating stateful set ss2
Jan 25 22:45:31.619: INFO: Waiting for StatefulSet statefulset-4429/ss2 to complete update
Jan 25 22:45:31.620: INFO: Waiting for Pod statefulset-4429/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 25 22:45:41.629: INFO: Waiting for StatefulSet statefulset-4429/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 25 22:45:51.634: INFO: Deleting all statefulset in ns statefulset-4429
Jan 25 22:45:51.637: INFO: Scaling statefulset ss2 to 0
Jan 25 22:46:21.685: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 22:46:21.689: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:46:21.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4429" for this suite.

• [SLOW TEST:161.058 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":221,"skipped":3824,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:46:21.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 25 22:46:21.896: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-3399 /api/v1/namespaces/watch-3399/configmaps/e2e-watch-test-watch-closed 0da5bcff-e47d-4621-8319-7385cb3927c2 4346126 0 2020-01-25 22:46:21 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 25 22:46:21.896: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-3399 /api/v1/namespaces/watch-3399/configmaps/e2e-watch-test-watch-closed 0da5bcff-e47d-4621-8319-7385cb3927c2 4346127 0 2020-01-25 22:46:21 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 25 22:46:21.912: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-3399 /api/v1/namespaces/watch-3399/configmaps/e2e-watch-test-watch-closed 0da5bcff-e47d-4621-8319-7385cb3927c2 4346128 0 2020-01-25 22:46:21 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 25 22:46:21.913: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-3399 /api/v1/namespaces/watch-3399/configmaps/e2e-watch-test-watch-closed 0da5bcff-e47d-4621-8319-7385cb3927c2 4346129 0 2020-01-25 22:46:21 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:46:21.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3399" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":222,"skipped":3826,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:46:21.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 22:46:22.025: INFO: Creating deployment "test-recreate-deployment"
Jan 25 22:46:22.044: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Jan 25 22:46:22.111: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan 25 22:46:24.134: INFO: Waiting for deployment "test-recreate-deployment" to complete
Jan 25 22:46:24.137: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715589182, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715589182, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715589182, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715589182, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:46:26.145: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715589182, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715589182, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715589182, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715589182, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:46:28.153: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715589182, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715589182, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715589182, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715589182, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 22:46:30.144: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 25 22:46:30.157: INFO: Updating deployment test-recreate-deployment
Jan 25 22:46:30.157: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run alongside old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 25 22:46:31.331: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-3093 /apis/apps/v1/namespaces/deployment-3093/deployments/test-recreate-deployment a4424c27-e36b-41c1-8d09-c0fac21809cf 4346260 2 2020-01-25 22:46:22 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004cd0158  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-25 22:46:31 +0000 UTC,LastTransitionTime:2020-01-25 22:46:31 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-01-25 22:46:31 +0000 UTC,LastTransitionTime:2020-01-25 22:46:22 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Jan 25 22:46:31.417: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-3093 /apis/apps/v1/namespaces/deployment-3093/replicasets/test-recreate-deployment-5f94c574ff c0a1854f-0322-4cd7-a97c-fd6890cc19ab 4346258 1 2020-01-25 22:46:31 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment a4424c27-e36b-41c1-8d09-c0fac21809cf 0xc004cd04f7 0xc004cd04f8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004cd0558  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 25 22:46:31.417: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 25 22:46:31.417: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-3093 /apis/apps/v1/namespaces/deployment-3093/replicasets/test-recreate-deployment-799c574856 85edcde4-a2ca-4875-bce2-58079d451037 4346249 2 2020-01-25 22:46:22 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment a4424c27-e36b-41c1-8d09-c0fac21809cf 0xc004cd05c7 0xc004cd05c8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004cd0638  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 25 22:46:31.425: INFO: Pod "test-recreate-deployment-5f94c574ff-6hld9" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-6hld9 test-recreate-deployment-5f94c574ff- deployment-3093 /api/v1/namespaces/deployment-3093/pods/test-recreate-deployment-5f94c574ff-6hld9 cae5fc06-696c-4d67-b5f4-e82fdb4d0e3e 4346261 0 2020-01-25 22:46:31 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff c0a1854f-0322-4cd7-a97c-fd6890cc19ab 0xc004cd0aa7 0xc004cd0aa8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4299j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4299j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4299j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 22:46:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 22:46:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 22:46:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 22:46:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-25 22:46:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:46:31.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3093" for this suite.

• [SLOW TEST:9.527 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":223,"skipped":3826,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:46:31.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jan 25 22:46:44.293: INFO: Successfully updated pod "annotationupdatebefd5332-7f36-45f6-b737-0e07831a485e"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:46:48.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3425" for this suite.

• [SLOW TEST:16.929 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3833,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:46:48.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:47:05.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-89" for this suite.

• [SLOW TEST:17.582 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":225,"skipped":3835,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:47:05.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 25 22:47:14.155: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:47:14.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-944" for this suite.

• [SLOW TEST:8.245 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3837,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:47:14.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:47:26.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8873" for this suite.

• [SLOW TEST:12.190 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3841,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:47:26.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-e6e0b806-e047-4c0a-a367-4dd1a04b2bfe
STEP: Creating a pod to test consume secrets
Jan 25 22:47:26.545: INFO: Waiting up to 5m0s for pod "pod-secrets-1bd00107-afe1-4bd6-871f-274d878d8e25" in namespace "secrets-596" to be "success or failure"
Jan 25 22:47:26.561: INFO: Pod "pod-secrets-1bd00107-afe1-4bd6-871f-274d878d8e25": Phase="Pending", Reason="", readiness=false. Elapsed: 15.463702ms
Jan 25 22:47:28.571: INFO: Pod "pod-secrets-1bd00107-afe1-4bd6-871f-274d878d8e25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025807599s
Jan 25 22:47:30.581: INFO: Pod "pod-secrets-1bd00107-afe1-4bd6-871f-274d878d8e25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036093916s
Jan 25 22:47:32.591: INFO: Pod "pod-secrets-1bd00107-afe1-4bd6-871f-274d878d8e25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045382613s
Jan 25 22:47:34.623: INFO: Pod "pod-secrets-1bd00107-afe1-4bd6-871f-274d878d8e25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077928488s
STEP: Saw pod success
Jan 25 22:47:34.623: INFO: Pod "pod-secrets-1bd00107-afe1-4bd6-871f-274d878d8e25" satisfied condition "success or failure"
Jan 25 22:47:34.627: INFO: Trying to get logs from node jerma-node pod pod-secrets-1bd00107-afe1-4bd6-871f-274d878d8e25 container secret-volume-test: 
STEP: delete the pod
Jan 25 22:47:34.766: INFO: Waiting for pod pod-secrets-1bd00107-afe1-4bd6-871f-274d878d8e25 to disappear
Jan 25 22:47:34.770: INFO: Pod pod-secrets-1bd00107-afe1-4bd6-871f-274d878d8e25 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:47:34.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-596" for this suite.

• [SLOW TEST:8.381 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3843,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:47:34.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 22:47:34.839: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:47:35.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6050" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":229,"skipped":3861,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:47:36.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 22:47:36.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 25 22:47:39.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4998 create -f -'
Jan 25 22:47:41.595: INFO: stderr: ""
Jan 25 22:47:41.595: INFO: stdout: "e2e-test-crd-publish-openapi-5721-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 25 22:47:41.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4998 delete e2e-test-crd-publish-openapi-5721-crds test-cr'
Jan 25 22:47:41.735: INFO: stderr: ""
Jan 25 22:47:41.735: INFO: stdout: "e2e-test-crd-publish-openapi-5721-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Jan 25 22:47:41.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4998 apply -f -'
Jan 25 22:47:42.154: INFO: stderr: ""
Jan 25 22:47:42.154: INFO: stdout: "e2e-test-crd-publish-openapi-5721-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 25 22:47:42.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4998 delete e2e-test-crd-publish-openapi-5721-crds test-cr'
Jan 25 22:47:42.325: INFO: stderr: ""
Jan 25 22:47:42.325: INFO: stdout: "e2e-test-crd-publish-openapi-5721-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 25 22:47:42.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5721-crds'
Jan 25 22:47:42.704: INFO: stderr: ""
Jan 25 22:47:42.704: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5721-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:47:46.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4998" for this suite.

• [SLOW TEST:10.394 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":230,"skipped":3861,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:47:46.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 25 22:47:55.131: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e5444120-6929-4781-a4ef-060ebef02e70"
Jan 25 22:47:55.131: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e5444120-6929-4781-a4ef-060ebef02e70" in namespace "pods-7048" to be "terminated due to deadline exceeded"
Jan 25 22:47:55.141: INFO: Pod "pod-update-activedeadlineseconds-e5444120-6929-4781-a4ef-060ebef02e70": Phase="Running", Reason="", readiness=true. Elapsed: 9.646162ms
Jan 25 22:47:57.147: INFO: Pod "pod-update-activedeadlineseconds-e5444120-6929-4781-a4ef-060ebef02e70": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.016270016s
Jan 25 22:47:57.147: INFO: Pod "pod-update-activedeadlineseconds-e5444120-6929-4781-a4ef-060ebef02e70" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:47:57.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7048" for this suite.

• [SLOW TEST:10.769 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3862,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:47:57.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jan 25 22:48:07.870: INFO: Successfully updated pod "labelsupdatef6d122b0-7110-493f-a8ad-0dad75b134a6"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:48:09.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5573" for this suite.

• [SLOW TEST:12.767 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3899,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:48:09.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 25 22:48:18.562: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:48:18.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2224" for this suite.

• [SLOW TEST:8.828 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3913,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:48:18.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jan 25 22:48:18.874: INFO: Waiting up to 5m0s for pod "downward-api-ed7f7b02-6cf6-4f0d-b0cc-4f41d599234a" in namespace "downward-api-8411" to be "success or failure"
Jan 25 22:48:18.898: INFO: Pod "downward-api-ed7f7b02-6cf6-4f0d-b0cc-4f41d599234a": Phase="Pending", Reason="", readiness=false. Elapsed: 23.99395ms
Jan 25 22:48:20.904: INFO: Pod "downward-api-ed7f7b02-6cf6-4f0d-b0cc-4f41d599234a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02965667s
Jan 25 22:48:22.911: INFO: Pod "downward-api-ed7f7b02-6cf6-4f0d-b0cc-4f41d599234a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036866776s
Jan 25 22:48:24.921: INFO: Pod "downward-api-ed7f7b02-6cf6-4f0d-b0cc-4f41d599234a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046977719s
Jan 25 22:48:26.926: INFO: Pod "downward-api-ed7f7b02-6cf6-4f0d-b0cc-4f41d599234a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051965202s
Jan 25 22:48:28.940: INFO: Pod "downward-api-ed7f7b02-6cf6-4f0d-b0cc-4f41d599234a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065449597s
STEP: Saw pod success
Jan 25 22:48:28.940: INFO: Pod "downward-api-ed7f7b02-6cf6-4f0d-b0cc-4f41d599234a" satisfied condition "success or failure"
Jan 25 22:48:28.944: INFO: Trying to get logs from node jerma-node pod downward-api-ed7f7b02-6cf6-4f0d-b0cc-4f41d599234a container dapi-container: 
STEP: delete the pod
Jan 25 22:48:29.070: INFO: Waiting for pod downward-api-ed7f7b02-6cf6-4f0d-b0cc-4f41d599234a to disappear
Jan 25 22:48:29.083: INFO: Pod downward-api-ed7f7b02-6cf6-4f0d-b0cc-4f41d599234a no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:48:29.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8411" for this suite.

• [SLOW TEST:10.326 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3920,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:48:29.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0125 22:49:11.379528       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 25 22:49:11.379: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:49:11.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8912" for this suite.

• [SLOW TEST:42.302 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":235,"skipped":3932,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:49:11.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-7906
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-7906
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7906
Jan 25 22:49:11.513: INFO: Found 0 stateful pods, waiting for 1
Jan 25 22:49:22.591: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 25 22:49:31.522: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 25 22:49:31.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7906 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 22:49:32.009: INFO: stderr: "I0125 22:49:31.760500    3843 log.go:172] (0xc000912000) (0xc0006c7ae0) Create stream\nI0125 22:49:31.760813    3843 log.go:172] (0xc000912000) (0xc0006c7ae0) Stream added, broadcasting: 1\nI0125 22:49:31.766447    3843 log.go:172] (0xc000912000) Reply frame received for 1\nI0125 22:49:31.766493    3843 log.go:172] (0xc000912000) (0xc0004ca000) Create stream\nI0125 22:49:31.766504    3843 log.go:172] (0xc000912000) (0xc0004ca000) Stream added, broadcasting: 3\nI0125 22:49:31.768010    3843 log.go:172] (0xc000912000) Reply frame received for 3\nI0125 22:49:31.768041    3843 log.go:172] (0xc000912000) (0xc0008f6000) Create stream\nI0125 22:49:31.768051    3843 log.go:172] (0xc000912000) (0xc0008f6000) Stream added, broadcasting: 5\nI0125 22:49:31.769590    3843 log.go:172] (0xc000912000) Reply frame received for 5\nI0125 22:49:31.880217    3843 log.go:172] (0xc000912000) Data frame received for 5\nI0125 22:49:31.880340    3843 log.go:172] (0xc0008f6000) (5) Data frame handling\nI0125 22:49:31.880380    3843 log.go:172] (0xc0008f6000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 22:49:31.901119    3843 log.go:172] (0xc000912000) Data frame received for 3\nI0125 22:49:31.901206    3843 log.go:172] (0xc0004ca000) (3) Data frame handling\nI0125 22:49:31.901255    3843 log.go:172] (0xc0004ca000) (3) Data frame sent\nI0125 22:49:31.999796    3843 log.go:172] (0xc000912000) Data frame received for 1\nI0125 22:49:31.999998    3843 log.go:172] (0xc0006c7ae0) (1) Data frame handling\nI0125 22:49:32.000023    3843 log.go:172] (0xc0006c7ae0) (1) Data frame sent\nI0125 22:49:32.000065    3843 log.go:172] (0xc000912000) (0xc0006c7ae0) Stream removed, broadcasting: 1\nI0125 22:49:32.000343    3843 log.go:172] (0xc000912000) (0xc0004ca000) Stream removed, broadcasting: 3\nI0125 22:49:32.000564    3843 log.go:172] (0xc000912000) (0xc0008f6000) Stream removed, broadcasting: 5\nI0125 22:49:32.000643    3843 log.go:172] (0xc000912000) Go away received\nI0125 22:49:32.000865    3843 log.go:172] (0xc000912000) (0xc0006c7ae0) Stream removed, broadcasting: 1\nI0125 22:49:32.000934    3843 log.go:172] (0xc000912000) (0xc0004ca000) Stream removed, broadcasting: 3\nI0125 22:49:32.000969    3843 log.go:172] (0xc000912000) (0xc0008f6000) Stream removed, broadcasting: 5\n"
Jan 25 22:49:32.009: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 22:49:32.009: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 25 22:49:32.015: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 25 22:49:42.026: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 22:49:42.026: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 22:49:42.098: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999997582s
Jan 25 22:49:43.108: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.959275593s
Jan 25 22:49:44.131: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.949950895s
Jan 25 22:49:45.136: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.926864508s
Jan 25 22:49:46.156: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.921820567s
Jan 25 22:49:47.170: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.902116769s
Jan 25 22:49:48.177: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.887741583s
Jan 25 22:49:49.185: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.880532952s
Jan 25 22:49:50.192: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.872750499s
Jan 25 22:49:51.199: INFO: Verifying statefulset ss doesn't scale past 1 for another 865.98266ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7906
Jan 25 22:49:52.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7906 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 22:49:52.646: INFO: stderr: "I0125 22:49:52.445737    3865 log.go:172] (0xc00093e000) (0xc000aac0a0) Create stream\nI0125 22:49:52.446114    3865 log.go:172] (0xc00093e000) (0xc000aac0a0) Stream added, broadcasting: 1\nI0125 22:49:52.449802    3865 log.go:172] (0xc00093e000) Reply frame received for 1\nI0125 22:49:52.449899    3865 log.go:172] (0xc00093e000) (0xc000a0a280) Create stream\nI0125 22:49:52.449919    3865 log.go:172] (0xc00093e000) (0xc000a0a280) Stream added, broadcasting: 3\nI0125 22:49:52.452954    3865 log.go:172] (0xc00093e000) Reply frame received for 3\nI0125 22:49:52.453001    3865 log.go:172] (0xc00093e000) (0xc000a3e0a0) Create stream\nI0125 22:49:52.453010    3865 log.go:172] (0xc00093e000) (0xc000a3e0a0) Stream added, broadcasting: 5\nI0125 22:49:52.454785    3865 log.go:172] (0xc00093e000) Reply frame received for 5\nI0125 22:49:52.549646    3865 log.go:172] (0xc00093e000) Data frame received for 3\nI0125 22:49:52.549985    3865 log.go:172] (0xc000a0a280) (3) Data frame handling\nI0125 22:49:52.550061    3865 log.go:172] (0xc000a0a280) (3) Data frame sent\nI0125 22:49:52.550216    3865 log.go:172] (0xc00093e000) Data frame received for 5\nI0125 22:49:52.550244    3865 log.go:172] (0xc000a3e0a0) (5) Data frame handling\nI0125 22:49:52.550264    3865 log.go:172] (0xc000a3e0a0) (5) Data frame sent\nI0125 22:49:52.550310    3865 log.go:172] (0xc00093e000) Data frame received for 5\nI0125 22:49:52.550330    3865 log.go:172] (0xc000a3e0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0125 22:49:52.550424    3865 log.go:172] (0xc000a3e0a0) (5) Data frame sent\nI0125 22:49:52.631288    3865 log.go:172] (0xc00093e000) Data frame received for 1\nI0125 22:49:52.631758    3865 log.go:172] (0xc00093e000) (0xc000a0a280) Stream removed, broadcasting: 3\nI0125 22:49:52.632042    3865 log.go:172] (0xc00093e000) (0xc000a3e0a0) Stream removed, broadcasting: 5\nI0125 22:49:52.632306    3865 log.go:172] (0xc000aac0a0) (1) Data frame handling\nI0125 22:49:52.632395    3865 log.go:172] (0xc000aac0a0) (1) Data frame sent\nI0125 22:49:52.632517    3865 log.go:172] (0xc00093e000) (0xc000aac0a0) Stream removed, broadcasting: 1\nI0125 22:49:52.632563    3865 log.go:172] (0xc00093e000) Go away received\nI0125 22:49:52.634315    3865 log.go:172] (0xc00093e000) (0xc000aac0a0) Stream removed, broadcasting: 1\nI0125 22:49:52.634340    3865 log.go:172] (0xc00093e000) (0xc000a0a280) Stream removed, broadcasting: 3\nI0125 22:49:52.634351    3865 log.go:172] (0xc00093e000) (0xc000a3e0a0) Stream removed, broadcasting: 5\n"
Jan 25 22:49:52.646: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 25 22:49:52.646: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 25 22:49:52.654: INFO: Found 1 stateful pod, waiting for 3
Jan 25 22:50:02.670: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 22:50:02.671: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 22:50:02.671: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 25 22:50:12.720: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 22:50:12.720: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 22:50:12.720: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 25 22:50:12.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7906 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 22:50:13.112: INFO: stderr: "I0125 22:50:12.932419    3888 log.go:172] (0xc000a1ee70) (0xc000663c20) Create stream\nI0125 22:50:12.932684    3888 log.go:172] (0xc000a1ee70) (0xc000663c20) Stream added, broadcasting: 1\nI0125 22:50:12.948522    3888 log.go:172] (0xc000a1ee70) Reply frame received for 1\nI0125 22:50:12.948861    3888 log.go:172] (0xc000a1ee70) (0xc000ae0140) Create stream\nI0125 22:50:12.948919    3888 log.go:172] (0xc000a1ee70) (0xc000ae0140) Stream added, broadcasting: 3\nI0125 22:50:12.952441    3888 log.go:172] (0xc000a1ee70) Reply frame received for 3\nI0125 22:50:12.952506    3888 log.go:172] (0xc000a1ee70) (0xc0009f2280) Create stream\nI0125 22:50:12.952521    3888 log.go:172] (0xc000a1ee70) (0xc0009f2280) Stream added, broadcasting: 5\nI0125 22:50:12.954159    3888 log.go:172] (0xc000a1ee70) Reply frame received for 5\nI0125 22:50:13.022604    3888 log.go:172] (0xc000a1ee70) Data frame received for 5\nI0125 22:50:13.022688    3888 log.go:172] (0xc0009f2280) (5) Data frame handling\nI0125 22:50:13.022712    3888 log.go:172] (0xc0009f2280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 22:50:13.022744    3888 log.go:172] (0xc000a1ee70) Data frame received for 3\nI0125 22:50:13.022755    3888 log.go:172] (0xc000ae0140) (3) Data frame handling\nI0125 22:50:13.022772    3888 log.go:172] (0xc000ae0140) (3) Data frame sent\nI0125 22:50:13.092845    3888 log.go:172] (0xc000a1ee70) (0xc000ae0140) Stream removed, broadcasting: 3\nI0125 22:50:13.093031    3888 log.go:172] (0xc000a1ee70) Data frame received for 1\nI0125 22:50:13.093065    3888 log.go:172] (0xc000663c20) (1) Data frame handling\nI0125 22:50:13.093103    3888 log.go:172] (0xc000663c20) (1) Data frame sent\nI0125 22:50:13.093113    3888 log.go:172] (0xc000a1ee70) (0xc0009f2280) Stream removed, broadcasting: 5\nI0125 22:50:13.093178    3888 log.go:172] (0xc000a1ee70) (0xc000663c20) Stream removed, broadcasting: 1\nI0125 22:50:13.093193    3888 log.go:172] (0xc000a1ee70) Go away received\nI0125 22:50:13.101430    3888 log.go:172] (0xc000a1ee70) (0xc000663c20) Stream removed, broadcasting: 1\nI0125 22:50:13.101781    3888 log.go:172] (0xc000a1ee70) (0xc000ae0140) Stream removed, broadcasting: 3\nI0125 22:50:13.101904    3888 log.go:172] (0xc000a1ee70) (0xc0009f2280) Stream removed, broadcasting: 5\n"
Jan 25 22:50:13.113: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 22:50:13.113: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 25 22:50:13.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7906 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 22:50:13.408: INFO: stderr: "I0125 22:50:13.253166    3908 log.go:172] (0xc000a6c2c0) (0xc000647d60) Create stream\nI0125 22:50:13.253346    3908 log.go:172] (0xc000a6c2c0) (0xc000647d60) Stream added, broadcasting: 1\nI0125 22:50:13.256444    3908 log.go:172] (0xc000a6c2c0) Reply frame received for 1\nI0125 22:50:13.256482    3908 log.go:172] (0xc000a6c2c0) (0xc000b54280) Create stream\nI0125 22:50:13.256496    3908 log.go:172] (0xc000a6c2c0) (0xc000b54280) Stream added, broadcasting: 3\nI0125 22:50:13.257467    3908 log.go:172] (0xc000a6c2c0) Reply frame received for 3\nI0125 22:50:13.257488    3908 log.go:172] (0xc000a6c2c0) (0xc000647e00) Create stream\nI0125 22:50:13.257497    3908 log.go:172] (0xc000a6c2c0) (0xc000647e00) Stream added, broadcasting: 5\nI0125 22:50:13.258905    3908 log.go:172] (0xc000a6c2c0) Reply frame received for 5\nI0125 22:50:13.309653    3908 log.go:172] (0xc000a6c2c0) Data frame received for 5\nI0125 22:50:13.309726    3908 log.go:172] (0xc000647e00) (5) Data frame handling\nI0125 22:50:13.309752    3908 log.go:172] (0xc000647e00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 22:50:13.326523    3908 log.go:172] (0xc000a6c2c0) Data frame received for 3\nI0125 22:50:13.326577    3908 log.go:172] (0xc000b54280) (3) Data frame handling\nI0125 22:50:13.326606    3908 log.go:172] (0xc000b54280) (3) Data frame sent\nI0125 22:50:13.397309    3908 log.go:172] (0xc000a6c2c0) (0xc000647e00) Stream removed, broadcasting: 5\nI0125 22:50:13.397588    3908 log.go:172] (0xc000a6c2c0) (0xc000b54280) Stream removed, broadcasting: 3\nI0125 22:50:13.397741    3908 log.go:172] (0xc000a6c2c0) Data frame received for 1\nI0125 22:50:13.397764    3908 log.go:172] (0xc000647d60) (1) Data frame handling\nI0125 22:50:13.397778    3908 log.go:172] (0xc000647d60) (1) Data frame sent\nI0125 22:50:13.397793    3908 log.go:172] (0xc000a6c2c0) (0xc000647d60) Stream removed, broadcasting: 1\nI0125 22:50:13.397838    3908 log.go:172] (0xc000a6c2c0) Go away received\nI0125 22:50:13.398444    3908 log.go:172] (0xc000a6c2c0) (0xc000647d60) Stream removed, broadcasting: 1\nI0125 22:50:13.398458    3908 log.go:172] (0xc000a6c2c0) (0xc000b54280) Stream removed, broadcasting: 3\nI0125 22:50:13.398466    3908 log.go:172] (0xc000a6c2c0) (0xc000647e00) Stream removed, broadcasting: 5\n"
Jan 25 22:50:13.408: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 22:50:13.408: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 25 22:50:13.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7906 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 22:50:13.983: INFO: stderr: "I0125 22:50:13.622929    3927 log.go:172] (0xc000b1b290) (0xc000972500) Create stream\nI0125 22:50:13.623392    3927 log.go:172] (0xc000b1b290) (0xc000972500) Stream added, broadcasting: 1\nI0125 22:50:13.639604    3927 log.go:172] (0xc000b1b290) Reply frame received for 1\nI0125 22:50:13.639693    3927 log.go:172] (0xc000b1b290) (0xc00065a6e0) Create stream\nI0125 22:50:13.639709    3927 log.go:172] (0xc000b1b290) (0xc00065a6e0) Stream added, broadcasting: 3\nI0125 22:50:13.641089    3927 log.go:172] (0xc000b1b290) Reply frame received for 3\nI0125 22:50:13.641175    3927 log.go:172] (0xc000b1b290) (0xc0004674a0) Create stream\nI0125 22:50:13.641190    3927 log.go:172] (0xc000b1b290) (0xc0004674a0) Stream added, broadcasting: 5\nI0125 22:50:13.643578    3927 log.go:172] (0xc000b1b290) Reply frame received for 5\nI0125 22:50:13.763548    3927 log.go:172] (0xc000b1b290) Data frame received for 5\nI0125 22:50:13.763723    3927 log.go:172] (0xc0004674a0) (5) Data frame handling\nI0125 22:50:13.763760    3927 log.go:172] (0xc0004674a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 22:50:13.788671    3927 log.go:172] (0xc000b1b290) Data frame received for 3\nI0125 22:50:13.788761    3927 log.go:172] (0xc00065a6e0) (3) Data frame handling\nI0125 22:50:13.788789    3927 log.go:172] (0xc00065a6e0) (3) Data frame sent\nI0125 22:50:13.949584    3927 log.go:172] (0xc000b1b290) Data frame received for 1\nI0125 22:50:13.949934    3927 log.go:172] (0xc000b1b290) (0xc0004674a0) Stream removed, broadcasting: 5\nI0125 22:50:13.950055    3927 log.go:172] (0xc000972500) (1) Data frame handling\nI0125 22:50:13.950110    3927 log.go:172] (0xc000972500) (1) Data frame sent\nI0125 22:50:13.950220    3927 log.go:172] (0xc000b1b290) (0xc00065a6e0) Stream removed, broadcasting: 3\nI0125 22:50:13.951094    3927 log.go:172] (0xc000b1b290) (0xc000972500) Stream removed, broadcasting: 1\nI0125 22:50:13.951892    3927 log.go:172] (0xc000b1b290) Go away received\nI0125 22:50:13.955508    3927 log.go:172] (0xc000b1b290) (0xc000972500) Stream removed, broadcasting: 1\nI0125 22:50:13.955674    3927 log.go:172] (0xc000b1b290) (0xc00065a6e0) Stream removed, broadcasting: 3\nI0125 22:50:13.955744    3927 log.go:172] (0xc000b1b290) (0xc0004674a0) Stream removed, broadcasting: 5\n"
Jan 25 22:50:13.983: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 22:50:13.983: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 25 22:50:13.983: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 22:50:13.990: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 25 22:50:24.000: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 22:50:24.000: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 22:50:24.000: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 22:50:24.013: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999977s
Jan 25 22:50:25.021: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992429896s
Jan 25 22:50:26.030: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.984998083s
Jan 25 22:50:27.040: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.975801018s
Jan 25 22:50:28.052: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.964758308s
Jan 25 22:50:30.064: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.953205765s
Jan 25 22:50:31.091: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.941214009s
Jan 25 22:50:32.099: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.914242381s
Jan 25 22:50:33.131: INFO: Verifying statefulset ss doesn't scale past 3 for another 906.710743ms
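The countdown above is a fixed observation window: the framework polls the StatefulSet status roughly once per second for 10s and asserts that replicas never exceed 3 while the pods are unhealthy. A rough shell equivalent of that check (a sketch, not the framework's code):

  # Poll status.replicas for ~10s; it should stay at 3 the whole time.
  for i in $(seq 1 10); do
    kubectl -n statefulset-7906 get statefulset ss -o jsonpath='{.status.replicas}{"\n"}'
    sleep 1
  done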
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of its pods are running in namespace statefulset-7906
Jan 25 22:50:34.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7906 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 22:50:34.544: INFO: stderr: "I0125 22:50:34.353919    3950 log.go:172] (0xc000a4c000) (0xc000982000) Create stream\nI0125 22:50:34.354180    3950 log.go:172] (0xc000a4c000) (0xc000982000) Stream added, broadcasting: 1\nI0125 22:50:34.361234    3950 log.go:172] (0xc000a4c000) Reply frame received for 1\nI0125 22:50:34.361316    3950 log.go:172] (0xc000a4c000) (0xc000a14000) Create stream\nI0125 22:50:34.361337    3950 log.go:172] (0xc000a4c000) (0xc000a14000) Stream added, broadcasting: 3\nI0125 22:50:34.362990    3950 log.go:172] (0xc000a4c000) Reply frame received for 3\nI0125 22:50:34.363017    3950 log.go:172] (0xc000a4c000) (0xc000a140a0) Create stream\nI0125 22:50:34.363025    3950 log.go:172] (0xc000a4c000) (0xc000a140a0) Stream added, broadcasting: 5\nI0125 22:50:34.364587    3950 log.go:172] (0xc000a4c000) Reply frame received for 5\nI0125 22:50:34.425861    3950 log.go:172] (0xc000a4c000) Data frame received for 5\nI0125 22:50:34.426103    3950 log.go:172] (0xc000a140a0) (5) Data frame handling\nI0125 22:50:34.426258    3950 log.go:172] (0xc000a140a0) (5) Data frame sent\nI0125 22:50:34.426817    3950 log.go:172] (0xc000a4c000) Data frame received for 3\nI0125 22:50:34.426908    3950 log.go:172] (0xc000a14000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0125 22:50:34.426979    3950 log.go:172] (0xc000a14000) (3) Data frame sent\nI0125 22:50:34.522646    3950 log.go:172] (0xc000a4c000) Data frame received for 1\nI0125 22:50:34.522853    3950 log.go:172] (0xc000a4c000) (0xc000a140a0) Stream removed, broadcasting: 5\nI0125 22:50:34.522928    3950 log.go:172] (0xc000982000) (1) Data frame handling\nI0125 22:50:34.522954    3950 log.go:172] (0xc000982000) (1) Data frame sent\nI0125 22:50:34.522978    3950 log.go:172] (0xc000a4c000) (0xc000982000) Stream removed, broadcasting: 1\nI0125 22:50:34.523334    3950 log.go:172] (0xc000a4c000) (0xc000a14000) Stream removed, broadcasting: 3\nI0125 22:50:34.523408    3950 log.go:172] (0xc000a4c000) Go away received\nI0125 22:50:34.524630    3950 log.go:172] (0xc000a4c000) (0xc000982000) Stream removed, broadcasting: 1\nI0125 22:50:34.524654    3950 log.go:172] (0xc000a4c000) (0xc000a14000) Stream removed, broadcasting: 3\nI0125 22:50:34.524663    3950 log.go:172] (0xc000a4c000) (0xc000a140a0) Stream removed, broadcasting: 5\n"
Jan 25 22:50:34.545: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 25 22:50:34.545: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 25 22:50:34.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7906 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 22:50:34.954: INFO: stderr: "I0125 22:50:34.721645    3972 log.go:172] (0xc000a85a20) (0xc000960960) Create stream\nI0125 22:50:34.721995    3972 log.go:172] (0xc000a85a20) (0xc000960960) Stream added, broadcasting: 1\nI0125 22:50:34.728535    3972 log.go:172] (0xc000a85a20) Reply frame received for 1\nI0125 22:50:34.728647    3972 log.go:172] (0xc000a85a20) (0xc000673b80) Create stream\nI0125 22:50:34.728656    3972 log.go:172] (0xc000a85a20) (0xc000673b80) Stream added, broadcasting: 3\nI0125 22:50:34.730117    3972 log.go:172] (0xc000a85a20) Reply frame received for 3\nI0125 22:50:34.730155    3972 log.go:172] (0xc000a85a20) (0xc000636780) Create stream\nI0125 22:50:34.730168    3972 log.go:172] (0xc000a85a20) (0xc000636780) Stream added, broadcasting: 5\nI0125 22:50:34.731182    3972 log.go:172] (0xc000a85a20) Reply frame received for 5\nI0125 22:50:34.832662    3972 log.go:172] (0xc000a85a20) Data frame received for 5\nI0125 22:50:34.832762    3972 log.go:172] (0xc000636780) (5) Data frame handling\nI0125 22:50:34.832781    3972 log.go:172] (0xc000636780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0125 22:50:34.832810    3972 log.go:172] (0xc000a85a20) Data frame received for 3\nI0125 22:50:34.832819    3972 log.go:172] (0xc000673b80) (3) Data frame handling\nI0125 22:50:34.832826    3972 log.go:172] (0xc000673b80) (3) Data frame sent\nI0125 22:50:34.946109    3972 log.go:172] (0xc000a85a20) (0xc000636780) Stream removed, broadcasting: 5\nI0125 22:50:34.946244    3972 log.go:172] (0xc000a85a20) Data frame received for 1\nI0125 22:50:34.946268    3972 log.go:172] (0xc000a85a20) (0xc000673b80) Stream removed, broadcasting: 3\nI0125 22:50:34.946294    3972 log.go:172] (0xc000960960) (1) Data frame handling\nI0125 22:50:34.946313    3972 log.go:172] (0xc000960960) (1) Data frame sent\nI0125 22:50:34.946321    3972 log.go:172] (0xc000a85a20) (0xc000960960) Stream removed, broadcasting: 1\nI0125 22:50:34.946331    3972 log.go:172] (0xc000a85a20) Go away received\nI0125 22:50:34.947318    3972 log.go:172] (0xc000a85a20) (0xc000960960) Stream removed, broadcasting: 1\nI0125 22:50:34.947334    3972 log.go:172] (0xc000a85a20) (0xc000673b80) Stream removed, broadcasting: 3\nI0125 22:50:34.947337    3972 log.go:172] (0xc000a85a20) (0xc000636780) Stream removed, broadcasting: 5\n"
Jan 25 22:50:34.954: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 25 22:50:34.955: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 25 22:50:34.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7906 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 22:50:35.262: INFO: rc: 126
Jan 25 22:50:35.262: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7906 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
cannot exec in a stopped state: unknown

stderr:
I0125 22:50:35.202034    3991 log.go:172] (0xc000a454a0) (0xc000912780) Create stream
I0125 22:50:35.202456    3991 log.go:172] (0xc000a454a0) (0xc000912780) Stream added, broadcasting: 1
I0125 22:50:35.217479    3991 log.go:172] (0xc000a454a0) Reply frame received for 1
I0125 22:50:35.217768    3991 log.go:172] (0xc000a454a0) (0xc0006c66e0) Create stream
I0125 22:50:35.217793    3991 log.go:172] (0xc000a454a0) (0xc0006c66e0) Stream added, broadcasting: 3
I0125 22:50:35.220375    3991 log.go:172] (0xc000a454a0) Reply frame received for 3
I0125 22:50:35.220598    3991 log.go:172] (0xc000a454a0) (0xc0005114a0) Create stream
I0125 22:50:35.220629    3991 log.go:172] (0xc000a454a0) (0xc0005114a0) Stream added, broadcasting: 5
I0125 22:50:35.222196    3991 log.go:172] (0xc000a454a0) Reply frame received for 5
I0125 22:50:35.250380    3991 log.go:172] (0xc000a454a0) Data frame received for 3
I0125 22:50:35.250411    3991 log.go:172] (0xc0006c66e0) (3) Data frame handling
I0125 22:50:35.250438    3991 log.go:172] (0xc0006c66e0) (3) Data frame sent
I0125 22:50:35.252338    3991 log.go:172] (0xc000a454a0) Data frame received for 1
I0125 22:50:35.252433    3991 log.go:172] (0xc000a454a0) (0xc0006c66e0) Stream removed, broadcasting: 3
I0125 22:50:35.252484    3991 log.go:172] (0xc000912780) (1) Data frame handling
I0125 22:50:35.252504    3991 log.go:172] (0xc000912780) (1) Data frame sent
I0125 22:50:35.252545    3991 log.go:172] (0xc000a454a0) (0xc0005114a0) Stream removed, broadcasting: 5
I0125 22:50:35.252660    3991 log.go:172] (0xc000a454a0) (0xc000912780) Stream removed, broadcasting: 1
I0125 22:50:35.252719    3991 log.go:172] (0xc000a454a0) Go away received
I0125 22:50:35.253595    3991 log.go:172] (0xc000a454a0) (0xc000912780) Stream removed, broadcasting: 1
I0125 22:50:35.253610    3991 log.go:172] (0xc000a454a0) (0xc0006c66e0) Stream removed, broadcasting: 3
I0125 22:50:35.253617    3991 log.go:172] (0xc000a454a0) (0xc0005114a0) Stream removed, broadcasting: 5
command terminated with exit code 126

error:
exit status 126
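Exit code 126 means the exec reached the node but no shell could be started because the container had already stopped ("cannot exec in a stopped state"): ss-2 was mid-termination from the scale-down. Every later retry instead fails at the API server with NotFound and exit code 1, because the pod object itself is gone. A sketch of telling the two states apart before exec'ing:

  # Gone entirely -> 'get' fails with NotFound; stopped but present -> phase != Running.
  phase=$(kubectl -n statefulset-7906 get pod ss-2 -o jsonpath='{.status.phase}') \
    || echo 'pod object deleted (NotFound)'
  [ "$phase" = "Running" ] && kubectl -n statefulset-7906 exec ss-2 -- /bin/sh -c 'echo alive'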
Jan 25 22:50:45.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7906 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 22:50:45.463: INFO: rc: 1
Jan 25 22:50:45.463: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7906 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 25 22:50:55 – 22:55:30: INFO: (28 identical retry attempts elided) The same RunHostCmd against ss-2 was retried every 10s; each attempt returned rc: 1 with empty stdout and stderr 'Error from server (NotFound): pods "ss-2" not found' (exit status 1).
Jan 25 22:55:40.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7906 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 22:55:40.973: INFO: rc: 1
Jan 25 22:55:40.974: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
Jan 25 22:55:40.974: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
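Reverse order means highest ordinal first: ss-2 is deleted and fully terminated before ss-1, which in turn goes before ss-0. The same ordering can be watched interactively (sketch):

  # Scale to zero and watch pods disappear: ss-2 first, ss-0 last.
  kubectl -n statefulset-7906 scale statefulset ss --replicas=0
  kubectl -n statefulset-7906 get pods -w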
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 25 22:55:40.993: INFO: Deleting all statefulset in ns statefulset-7906
Jan 25 22:55:40.996: INFO: Scaling statefulset ss to 0
Jan 25 22:55:41.009: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 22:55:41.012: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:55:41.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7906" for this suite.

• [SLOW TEST:389.701 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":236,"skipped":3947,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:55:41.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-62fbc8a7-b946-4049-b963-152e21c336bb
STEP: Creating a pod to test consume configMaps
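The pod under test mounts one ConfigMap as two separate volumes and reads the same key back through both mount paths. A sketch of that shape (illustrative names, image, and paths; not the framework's exact spec):

  kubectl -n configmap-4218 create configmap cm-demo --from-literal=data-1=value-1
  cat <<'EOF' | kubectl -n configmap-4218 apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-two-volumes
  spec:
    restartPolicy: Never
    volumes:
    - name: vol-a
      configMap:
        name: cm-demo
    - name: vol-b
      configMap:
        name: cm-demo
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/cm-a/data-1 /etc/cm-b/data-1"]
      volumeMounts:
      - name: vol-a
        mountPath: /etc/cm-a
      - name: vol-b
        mountPath: /etc/cm-b
  EOF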
Jan 25 22:55:41.176: INFO: Waiting up to 5m0s for pod "pod-configmaps-099f7155-d036-4621-ac2c-2e2fd8be7fea" in namespace "configmap-4218" to be "success or failure"
Jan 25 22:55:41.184: INFO: Pod "pod-configmaps-099f7155-d036-4621-ac2c-2e2fd8be7fea": Phase="Pending", Reason="", readiness=false. Elapsed: 7.168608ms
Jan 25 22:55:43.192: INFO: Pod "pod-configmaps-099f7155-d036-4621-ac2c-2e2fd8be7fea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015919949s
Jan 25 22:55:45.335: INFO: Pod "pod-configmaps-099f7155-d036-4621-ac2c-2e2fd8be7fea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158666968s
Jan 25 22:55:47.359: INFO: Pod "pod-configmaps-099f7155-d036-4621-ac2c-2e2fd8be7fea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.182731592s
Jan 25 22:55:49.411: INFO: Pod "pod-configmaps-099f7155-d036-4621-ac2c-2e2fd8be7fea": Phase="Pending", Reason="", readiness=false. Elapsed: 8.234242675s
Jan 25 22:55:51.418: INFO: Pod "pod-configmaps-099f7155-d036-4621-ac2c-2e2fd8be7fea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.241864371s
STEP: Saw pod success
Jan 25 22:55:51.418: INFO: Pod "pod-configmaps-099f7155-d036-4621-ac2c-2e2fd8be7fea" satisfied condition "success or failure"
Jan 25 22:55:51.422: INFO: Trying to get logs from node jerma-node pod pod-configmaps-099f7155-d036-4621-ac2c-2e2fd8be7fea container configmap-volume-test: 
STEP: delete the pod
Jan 25 22:55:51.616: INFO: Waiting for pod pod-configmaps-099f7155-d036-4621-ac2c-2e2fd8be7fea to disappear
Jan 25 22:55:51.635: INFO: Pod pod-configmaps-099f7155-d036-4621-ac2c-2e2fd8be7fea no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:55:51.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4218" for this suite.

• [SLOW TEST:10.563 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3957,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:55:51.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-721.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-721.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
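Each command block above is one probe loop: dig queries the cluster DNS name over UDP (+notcp) and over TCP (+tcp), plus a pod A record derived from the pod's own IP, and writes an OK marker file only when the answer section is non-empty (the doubled $$ is escaping inside the pod command). One iteration, unescaped for readability:

  # Probe the cluster DNS name over UDP; write the marker only on a non-empty answer.
  check="$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" \
    && test -n "$check" \
    && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local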

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 25 22:56:04.019: INFO: DNS probes using dns-721/dns-test-cef8a31e-ce4c-47b0-974e-98f45f8a667b succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:56:04.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-721" for this suite.

• [SLOW TEST:12.540 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":238,"skipped":3988,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:56:04.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
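Because the container sets no memory limit, the downward API resourceFieldRef for limits.memory falls back to the node's allocatable memory, and the test asserts the mounted file reports that value. A sketch of such a projected downward API volume (illustrative names, not the framework's exact spec):

  cat <<'EOF' | kubectl -n projected-472 apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-mem
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: memory_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.memory
  EOF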
Jan 25 22:56:04.386: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d3be956a-1851-46b3-bb51-e3e21ce12ef8" in namespace "projected-472" to be "success or failure"
Jan 25 22:56:04.396: INFO: Pod "downwardapi-volume-d3be956a-1851-46b3-bb51-e3e21ce12ef8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.369037ms
Jan 25 22:56:06.405: INFO: Pod "downwardapi-volume-d3be956a-1851-46b3-bb51-e3e21ce12ef8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018602566s
Jan 25 22:56:08.413: INFO: Pod "downwardapi-volume-d3be956a-1851-46b3-bb51-e3e21ce12ef8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026460174s
Jan 25 22:56:10.426: INFO: Pod "downwardapi-volume-d3be956a-1851-46b3-bb51-e3e21ce12ef8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039340317s
Jan 25 22:56:12.433: INFO: Pod "downwardapi-volume-d3be956a-1851-46b3-bb51-e3e21ce12ef8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046587754s
Jan 25 22:56:14.440: INFO: Pod "downwardapi-volume-d3be956a-1851-46b3-bb51-e3e21ce12ef8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054287653s
STEP: Saw pod success
Jan 25 22:56:14.441: INFO: Pod "downwardapi-volume-d3be956a-1851-46b3-bb51-e3e21ce12ef8" satisfied condition "success or failure"
Jan 25 22:56:14.447: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d3be956a-1851-46b3-bb51-e3e21ce12ef8 container client-container: 
STEP: delete the pod
Jan 25 22:56:14.520: INFO: Waiting for pod downwardapi-volume-d3be956a-1851-46b3-bb51-e3e21ce12ef8 to disappear
Jan 25 22:56:14.524: INFO: Pod downwardapi-volume-d3be956a-1851-46b3-bb51-e3e21ce12ef8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:56:14.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-472" for this suite.

• [SLOW TEST:10.334 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3989,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:56:14.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
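"(root,0666,tmpfs)" decodes as: the container runs as root, the test file is created with mode 0666, and the emptyDir is backed by memory (medium: Memory, i.e. a tmpfs mount). A sketch of an equivalent pod (illustrative names and image):

  cat <<'EOF' | kubectl -n emptydir-905 apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0666
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "touch /test/f && chmod 0666 /test/f && ls -l /test/f && mount | grep /test"]
      volumeMounts:
      - name: scratch
        mountPath: /test
    volumes:
    - name: scratch
      emptyDir:
        medium: Memory
  EOF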
Jan 25 22:56:14.766: INFO: Waiting up to 5m0s for pod "pod-3541eaf5-a5fc-4595-9c6e-2123d003eede" in namespace "emptydir-905" to be "success or failure"
Jan 25 22:56:14.818: INFO: Pod "pod-3541eaf5-a5fc-4595-9c6e-2123d003eede": Phase="Pending", Reason="", readiness=false. Elapsed: 52.176967ms
Jan 25 22:56:16.830: INFO: Pod "pod-3541eaf5-a5fc-4595-9c6e-2123d003eede": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063855002s
Jan 25 22:56:18.837: INFO: Pod "pod-3541eaf5-a5fc-4595-9c6e-2123d003eede": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070576806s
Jan 25 22:56:20.852: INFO: Pod "pod-3541eaf5-a5fc-4595-9c6e-2123d003eede": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085821755s
Jan 25 22:56:22.865: INFO: Pod "pod-3541eaf5-a5fc-4595-9c6e-2123d003eede": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098461857s
Jan 25 22:56:24.871: INFO: Pod "pod-3541eaf5-a5fc-4595-9c6e-2123d003eede": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.105030501s
STEP: Saw pod success
Jan 25 22:56:24.871: INFO: Pod "pod-3541eaf5-a5fc-4595-9c6e-2123d003eede" satisfied condition "success or failure"
Jan 25 22:56:24.876: INFO: Trying to get logs from node jerma-node pod pod-3541eaf5-a5fc-4595-9c6e-2123d003eede container test-container: 
STEP: delete the pod
Jan 25 22:56:24.923: INFO: Waiting for pod pod-3541eaf5-a5fc-4595-9c6e-2123d003eede to disappear
Jan 25 22:56:24.930: INFO: Pod pod-3541eaf5-a5fc-4595-9c6e-2123d003eede no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:56:24.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-905" for this suite.

• [SLOW TEST:10.403 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3989,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:56:24.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
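A postStart exec hook runs inside the container immediately after it is created; here the hook calls back to the handler pod created in BeforeEach, which is how the "check poststart hook" step verifies delivery. A hedged sketch of the pod shape (image, command, and the handler URL are placeholders, not the test's actual spec):

  cat <<'EOF' | kubectl -n container-lifecycle-hook-995 apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-poststart-exec-hook
  spec:
    containers:
    - name: hooked
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      lifecycle:
        postStart:
          exec:
            # placeholder URL standing in for the handler pod's address
            command: ["sh", "-c", "wget -q -O- http://handler.example:8080/echo?msg=poststart"]
  EOF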
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 25 22:56:41.285: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 22:56:41.297: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 22:56:43.298: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 22:56:43.306: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 22:56:45.298: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 22:56:45.307: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 22:56:47.298: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 22:56:47.305: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:56:47.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-995" for this suite.

• [SLOW TEST:22.378 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":4001,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:56:47.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
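The substitution under test is Kubernetes' own $(VAR) expansion: references in command/args are resolved against the container's env by the kubelet before the process starts, with no shell involved. A minimal sketch (illustrative names):

  cat <<'EOF' | kubectl -n var-expansion-4427 apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      env:
      - name: MESSAGE
        value: "test-value"
      # $(MESSAGE) is expanded by the kubelet, so the container runs: echo test-value
      command: ["echo", "$(MESSAGE)"]
  EOF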
Jan 25 22:56:47.462: INFO: Waiting up to 5m0s for pod "var-expansion-9adc240d-1069-43c5-b5c7-565b0bb72dce" in namespace "var-expansion-4427" to be "success or failure"
Jan 25 22:56:47.488: INFO: Pod "var-expansion-9adc240d-1069-43c5-b5c7-565b0bb72dce": Phase="Pending", Reason="", readiness=false. Elapsed: 26.080655ms
Jan 25 22:56:49.504: INFO: Pod "var-expansion-9adc240d-1069-43c5-b5c7-565b0bb72dce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042290452s
Jan 25 22:56:51.513: INFO: Pod "var-expansion-9adc240d-1069-43c5-b5c7-565b0bb72dce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050548558s
Jan 25 22:56:53.520: INFO: Pod "var-expansion-9adc240d-1069-43c5-b5c7-565b0bb72dce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058262783s
Jan 25 22:56:55.527: INFO: Pod "var-expansion-9adc240d-1069-43c5-b5c7-565b0bb72dce": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06529912s
Jan 25 22:56:57.536: INFO: Pod "var-expansion-9adc240d-1069-43c5-b5c7-565b0bb72dce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07374782s
STEP: Saw pod success
Jan 25 22:56:57.536: INFO: Pod "var-expansion-9adc240d-1069-43c5-b5c7-565b0bb72dce" satisfied condition "success or failure"
Jan 25 22:56:57.541: INFO: Trying to get logs from node jerma-node pod var-expansion-9adc240d-1069-43c5-b5c7-565b0bb72dce container dapi-container: 
STEP: delete the pod
Jan 25 22:56:58.314: INFO: Waiting for pod var-expansion-9adc240d-1069-43c5-b5c7-565b0bb72dce to disappear
Jan 25 22:56:58.330: INFO: Pod var-expansion-9adc240d-1069-43c5-b5c7-565b0bb72dce no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:56:58.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4427" for this suite.

• [SLOW TEST:11.035 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":4039,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:56:58.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
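The preStop counterpart fires on deletion: before the container is killed, the kubelet performs the hook's HTTP GET against the configured target, and the "check prestop hook" step then verifies the handler pod received it. A hedged sketch of such a pod (image, path, and the host IP are placeholders for the test's handler pod):

  cat <<'EOF' | kubectl -n container-lifecycle-hook-7651 apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-prestop-http-hook
  spec:
    containers:
    - name: hooked
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      lifecycle:
        preStop:
          httpGet:
            path: /echo?msg=prestop
            port: 8080
            host: 10.32.0.1   # placeholder for the handler pod's IP
  EOF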
STEP: delete the pod with lifecycle hook
Jan 25 22:57:14.688: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 25 22:57:14.711: INFO: Pod pod-with-prestop-http-hook still exists
Jan 25 22:57:16.712: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 25 22:57:16.732: INFO: Pod pod-with-prestop-http-hook still exists
Jan 25 22:57:18.712: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 25 22:57:18.746: INFO: Pod pod-with-prestop-http-hook still exists
Jan 25 22:57:20.712: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 25 22:57:20.719: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:57:20.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7651" for this suite.

• [SLOW TEST:22.395 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":4040,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:57:20.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:57:20.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4525" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":4052,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:57:21.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-1798
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1798 to expose endpoints map[]
Jan 25 22:57:21.602: INFO: successfully validated that service endpoint-test2 in namespace services-1798 exposes endpoints map[] (162.381941ms elapsed)
STEP: Creating pod pod1 in namespace services-1798
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1798 to expose endpoints map[pod1:[80]]
Jan 25 22:57:25.716: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.093263515s elapsed, will retry)
Jan 25 22:57:30.886: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (9.263043971s elapsed, will retry)
Jan 25 22:57:31.912: INFO: successfully validated that service endpoint-test2 in namespace services-1798 exposes endpoints map[pod1:[80]] (10.289239316s elapsed)
STEP: Creating pod pod2 in namespace services-1798
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1798 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 25 22:57:37.186: INFO: Unexpected endpoints: found map[a43b723e-a705-46ab-b654-ee25c529ef4f:[80]], expected map[pod1:[80] pod2:[80]] (5.266944955s elapsed, will retry)
Jan 25 22:57:39.241: INFO: successfully validated that service endpoint-test2 in namespace services-1798 exposes endpoints map[pod1:[80] pod2:[80]] (7.321794153s elapsed)
STEP: Deleting pod pod1 in namespace services-1798
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1798 to expose endpoints map[pod2:[80]]
Jan 25 22:57:39.289: INFO: successfully validated that service endpoint-test2 in namespace services-1798 exposes endpoints map[pod2:[80]] (32.637112ms elapsed)
STEP: Deleting pod pod2 in namespace services-1798
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1798 to expose endpoints map[]
Jan 25 22:57:39.417: INFO: successfully validated that service endpoint-test2 in namespace services-1798 exposes endpoints map[] (111.690509ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:57:39.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1798" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:18.462 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":245,"skipped":4064,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:57:39.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-35ec07c2-7659-47a2-b640-c6d7a89233de
STEP: Creating a pod to test consume secrets
Jan 25 22:57:39.691: INFO: Waiting up to 5m0s for pod "pod-secrets-7e27390a-aeb0-4e10-b528-ab86f82436aa" in namespace "secrets-5076" to be "success or failure"
Jan 25 22:57:39.794: INFO: Pod "pod-secrets-7e27390a-aeb0-4e10-b528-ab86f82436aa": Phase="Pending", Reason="", readiness=false. Elapsed: 102.329856ms
Jan 25 22:57:41.805: INFO: Pod "pod-secrets-7e27390a-aeb0-4e10-b528-ab86f82436aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113190554s
Jan 25 22:57:45.260: INFO: Pod "pod-secrets-7e27390a-aeb0-4e10-b528-ab86f82436aa": Phase="Pending", Reason="", readiness=false. Elapsed: 5.568690431s
Jan 25 22:57:47.268: INFO: Pod "pod-secrets-7e27390a-aeb0-4e10-b528-ab86f82436aa": Phase="Pending", Reason="", readiness=false. Elapsed: 7.576723169s
Jan 25 22:57:49.276: INFO: Pod "pod-secrets-7e27390a-aeb0-4e10-b528-ab86f82436aa": Phase="Pending", Reason="", readiness=false. Elapsed: 9.584726081s
Jan 25 22:57:51.284: INFO: Pod "pod-secrets-7e27390a-aeb0-4e10-b528-ab86f82436aa": Phase="Pending", Reason="", readiness=false. Elapsed: 11.592201463s
Jan 25 22:57:53.291: INFO: Pod "pod-secrets-7e27390a-aeb0-4e10-b528-ab86f82436aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.599427153s
STEP: Saw pod success
Jan 25 22:57:53.291: INFO: Pod "pod-secrets-7e27390a-aeb0-4e10-b528-ab86f82436aa" satisfied condition "success or failure"
Jan 25 22:57:53.295: INFO: Trying to get logs from node jerma-node pod pod-secrets-7e27390a-aeb0-4e10-b528-ab86f82436aa container secret-volume-test: 
STEP: delete the pod
Jan 25 22:57:53.454: INFO: Waiting for pod pod-secrets-7e27390a-aeb0-4e10-b528-ab86f82436aa to disappear
Jan 25 22:57:53.472: INFO: Pod pod-secrets-7e27390a-aeb0-4e10-b528-ab86f82436aa no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:57:53.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5076" for this suite.

• [SLOW TEST:14.018 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4114,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:57:53.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:58:53.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-442" for this suite.

• [SLOW TEST:60.238 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4117,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:58:53.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-rnbg6 in namespace proxy-5821
I0125 22:58:53.947131       8 runners.go:189] Created replication controller with name: proxy-service-rnbg6, namespace: proxy-5821, replica count: 1
I0125 22:58:54.999280       8 runners.go:189] proxy-service-rnbg6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 22:58:55.999807       8 runners.go:189] proxy-service-rnbg6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 22:58:57.000466       8 runners.go:189] proxy-service-rnbg6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 22:58:58.001134       8 runners.go:189] proxy-service-rnbg6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 22:58:59.001772       8 runners.go:189] proxy-service-rnbg6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 22:59:00.002773       8 runners.go:189] proxy-service-rnbg6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 22:59:01.004082       8 runners.go:189] proxy-service-rnbg6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 22:59:02.004890       8 runners.go:189] proxy-service-rnbg6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 22:59:03.005348       8 runners.go:189] proxy-service-rnbg6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0125 22:59:04.005956       8 runners.go:189] proxy-service-rnbg6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0125 22:59:05.006415       8 runners.go:189] proxy-service-rnbg6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0125 22:59:06.007470       8 runners.go:189] proxy-service-rnbg6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0125 22:59:07.008065       8 runners.go:189] proxy-service-rnbg6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0125 22:59:08.008619       8 runners.go:189] proxy-service-rnbg6 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 25 22:59:08.014: INFO: setup took 14.172680574s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 25 22:59:08.039: INFO: (0) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 24.512496ms)
Jan 25 22:59:08.039: INFO: (0) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname1/proxy/: foo (200; 24.841004ms)
Jan 25 22:59:08.039: INFO: (0) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 25.003682ms)
Jan 25 22:59:08.039: INFO: (0) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname1/proxy/: foo (200; 24.859982ms)
Jan 25 22:59:08.039: INFO: (0) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx/proxy/: test (200; 25.014975ms)
Jan 25 22:59:08.039: INFO: (0) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname2/proxy/: bar (200; 25.198743ms)
Jan 25 22:59:08.039: INFO: (0) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:1080/proxy/: ... (200; 25.618233ms)
Jan 25 22:59:08.044: INFO: (0) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 29.760624ms)
Jan 25 22:59:08.044: INFO: (0) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:1080/proxy/: test<... (200; 29.676602ms)
Jan 25 22:59:08.044: INFO: (0) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname2/proxy/: bar (200; 30.192321ms)
Jan 25 22:59:08.044: INFO: (0) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 29.941545ms)
Jan 25 22:59:08.047: INFO: (0) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:443/proxy/: test<... (200; 17.856987ms)
Jan 25 22:59:08.074: INFO: (1) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname2/proxy/: bar (200; 18.194054ms)
Jan 25 22:59:08.074: INFO: (1) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname1/proxy/: tls baz (200; 18.291218ms)
Jan 25 22:59:08.074: INFO: (1) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx/proxy/: test (200; 18.444969ms)
Jan 25 22:59:08.075: INFO: (1) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname2/proxy/: bar (200; 19.340128ms)
Jan 25 22:59:08.075: INFO: (1) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:460/proxy/: tls baz (200; 19.46902ms)
Jan 25 22:59:08.075: INFO: (1) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname1/proxy/: foo (200; 20.089203ms)
Jan 25 22:59:08.075: INFO: (1) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname1/proxy/: foo (200; 19.755637ms)
Jan 25 22:59:08.076: INFO: (1) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:1080/proxy/: ... (200; 19.986575ms)
Jan 25 22:59:08.076: INFO: (1) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 20.679038ms)
Jan 25 22:59:08.076: INFO: (1) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 20.57519ms)
Jan 25 22:59:08.076: INFO: (1) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 20.243161ms)
Jan 25 22:59:08.077: INFO: (1) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 21.297084ms)
Jan 25 22:59:08.077: INFO: (1) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:462/proxy/: tls qux (200; 21.553915ms)
Jan 25 22:59:08.078: INFO: (1) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname2/proxy/: tls qux (200; 22.669477ms)
Jan 25 22:59:08.091: INFO: (2) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 13.118814ms)
Jan 25 22:59:08.095: INFO: (2) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:462/proxy/: tls qux (200; 16.467539ms)
Jan 25 22:59:08.095: INFO: (2) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname2/proxy/: bar (200; 16.498621ms)
Jan 25 22:59:08.095: INFO: (2) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx/proxy/: test (200; 16.498931ms)
Jan 25 22:59:08.095: INFO: (2) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname1/proxy/: foo (200; 16.592283ms)
Jan 25 22:59:08.095: INFO: (2) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:1080/proxy/: ... (200; 17.113089ms)
Jan 25 22:59:08.095: INFO: (2) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname1/proxy/: foo (200; 17.143738ms)
Jan 25 22:59:08.095: INFO: (2) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 17.012553ms)
Jan 25 22:59:08.095: INFO: (2) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:460/proxy/: tls baz (200; 17.310698ms)
Jan 25 22:59:08.096: INFO: (2) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:443/proxy/: test<... (200; 20.977312ms)
Jan 25 22:59:08.101: INFO: (2) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname1/proxy/: tls baz (200; 22.695021ms)
Jan 25 22:59:08.107: INFO: (3) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:1080/proxy/: ... (200; 5.622466ms)
Jan 25 22:59:08.116: INFO: (3) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx/proxy/: test (200; 14.958466ms)
Jan 25 22:59:08.116: INFO: (3) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:460/proxy/: tls baz (200; 15.034573ms)
Jan 25 22:59:08.116: INFO: (3) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:462/proxy/: tls qux (200; 15.252903ms)
Jan 25 22:59:08.117: INFO: (3) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 16.151035ms)
Jan 25 22:59:08.118: INFO: (3) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:1080/proxy/: test<... (200; 16.474641ms)
Jan 25 22:59:08.118: INFO: (3) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 16.735303ms)
Jan 25 22:59:08.118: INFO: (3) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:443/proxy/: test (200; 14.247335ms)
Jan 25 22:59:08.146: INFO: (4) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 14.473663ms)
Jan 25 22:59:08.146: INFO: (4) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:1080/proxy/: test<... (200; 14.710069ms)
Jan 25 22:59:08.148: INFO: (4) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname1/proxy/: foo (200; 16.244785ms)
Jan 25 22:59:08.148: INFO: (4) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:1080/proxy/: ... (200; 16.576803ms)
Jan 25 22:59:08.149: INFO: (4) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname2/proxy/: tls qux (200; 17.184394ms)
Jan 25 22:59:08.150: INFO: (4) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 18.058796ms)
Jan 25 22:59:08.150: INFO: (4) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname2/proxy/: bar (200; 18.020587ms)
Jan 25 22:59:08.150: INFO: (4) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname1/proxy/: tls baz (200; 18.078973ms)
Jan 25 22:59:08.150: INFO: (4) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname2/proxy/: bar (200; 18.108079ms)
Jan 25 22:59:08.150: INFO: (4) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:462/proxy/: tls qux (200; 18.551776ms)
Jan 25 22:59:08.151: INFO: (4) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname1/proxy/: foo (200; 20.120745ms)
Jan 25 22:59:08.162: INFO: (5) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:1080/proxy/: ... (200; 10.241322ms)
Jan 25 22:59:08.162: INFO: (5) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname2/proxy/: tls qux (200; 10.369819ms)
Jan 25 22:59:08.162: INFO: (5) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname1/proxy/: tls baz (200; 10.651849ms)
Jan 25 22:59:08.165: INFO: (5) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx/proxy/: test (200; 13.536536ms)
Jan 25 22:59:08.166: INFO: (5) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname2/proxy/: bar (200; 13.731152ms)
Jan 25 22:59:08.166: INFO: (5) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:1080/proxy/: test<... (200; 13.859723ms)
Jan 25 22:59:08.166: INFO: (5) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:460/proxy/: tls baz (200; 14.087636ms)
Jan 25 22:59:08.166: INFO: (5) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 14.223716ms)
Jan 25 22:59:08.166: INFO: (5) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 14.132603ms)
Jan 25 22:59:08.166: INFO: (5) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 14.490532ms)
Jan 25 22:59:08.167: INFO: (5) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname1/proxy/: foo (200; 14.735416ms)
Jan 25 22:59:08.167: INFO: (5) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname2/proxy/: bar (200; 15.137063ms)
Jan 25 22:59:08.167: INFO: (5) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:443/proxy/: test (200; 11.262853ms)
Jan 25 22:59:08.179: INFO: (6) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:1080/proxy/: ... (200; 11.384461ms)
Jan 25 22:59:08.179: INFO: (6) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname2/proxy/: bar (200; 11.621208ms)
Jan 25 22:59:08.179: INFO: (6) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname2/proxy/: bar (200; 11.496632ms)
Jan 25 22:59:08.179: INFO: (6) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:1080/proxy/: test<... (200; 11.605139ms)
Jan 25 22:59:08.180: INFO: (6) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname1/proxy/: foo (200; 11.664401ms)
Jan 25 22:59:08.180: INFO: (6) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:460/proxy/: tls baz (200; 11.799011ms)
Jan 25 22:59:08.180: INFO: (6) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname1/proxy/: tls baz (200; 12.070289ms)
Jan 25 22:59:08.193: INFO: (7) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname2/proxy/: tls qux (200; 12.637833ms)
Jan 25 22:59:08.194: INFO: (7) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname1/proxy/: tls baz (200; 13.838458ms)
Jan 25 22:59:08.194: INFO: (7) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname2/proxy/: bar (200; 13.970778ms)
Jan 25 22:59:08.194: INFO: (7) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname2/proxy/: bar (200; 14.191789ms)
Jan 25 22:59:08.195: INFO: (7) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:1080/proxy/: ... (200; 14.739146ms)
Jan 25 22:59:08.195: INFO: (7) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname1/proxy/: foo (200; 14.952996ms)
Jan 25 22:59:08.195: INFO: (7) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:1080/proxy/: test<... (200; 14.803463ms)
Jan 25 22:59:08.195: INFO: (7) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname1/proxy/: foo (200; 15.130534ms)
Jan 25 22:59:08.196: INFO: (7) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 15.410137ms)
Jan 25 22:59:08.196: INFO: (7) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 15.637161ms)
Jan 25 22:59:08.203: INFO: (7) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 22.955552ms)
Jan 25 22:59:08.203: INFO: (7) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:462/proxy/: tls qux (200; 23.184712ms)
Jan 25 22:59:08.203: INFO: (7) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx/proxy/: test (200; 23.060412ms)
Jan 25 22:59:08.203: INFO: (7) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 23.146285ms)
Jan 25 22:59:08.203: INFO: (7) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:443/proxy/: test<... (200; 11.146988ms)
Jan 25 22:59:08.215: INFO: (8) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 11.913664ms)
Jan 25 22:59:08.216: INFO: (8) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname1/proxy/: foo (200; 12.490789ms)
Jan 25 22:59:08.217: INFO: (8) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname2/proxy/: tls qux (200; 13.223799ms)
Jan 25 22:59:08.217: INFO: (8) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:443/proxy/: test (200; 13.451464ms)
Jan 25 22:59:08.217: INFO: (8) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname2/proxy/: bar (200; 13.941246ms)
Jan 25 22:59:08.217: INFO: (8) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname1/proxy/: foo (200; 13.875328ms)
Jan 25 22:59:08.217: INFO: (8) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 13.877612ms)
Jan 25 22:59:08.218: INFO: (8) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 14.023661ms)
Jan 25 22:59:08.218: INFO: (8) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname1/proxy/: tls baz (200; 14.131688ms)
Jan 25 22:59:08.218: INFO: (8) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:460/proxy/: tls baz (200; 14.156632ms)
Jan 25 22:59:08.218: INFO: (8) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 14.322436ms)
Jan 25 22:59:08.218: INFO: (8) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:1080/proxy/: ... (200; 14.168596ms)
Jan 25 22:59:08.223: INFO: (9) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:462/proxy/: tls qux (200; 5.450215ms)
Jan 25 22:59:08.230: INFO: (9) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 11.942106ms)
Jan 25 22:59:08.230: INFO: (9) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname2/proxy/: tls qux (200; 12.012512ms)
Jan 25 22:59:08.230: INFO: (9) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:1080/proxy/: test<... (200; 11.748694ms)
Jan 25 22:59:08.230: INFO: (9) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 12.376104ms)
Jan 25 22:59:08.231: INFO: (9) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 12.522931ms)
Jan 25 22:59:08.231: INFO: (9) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx/proxy/: test (200; 12.526823ms)
Jan 25 22:59:08.231: INFO: (9) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 12.496895ms)
Jan 25 22:59:08.231: INFO: (9) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:443/proxy/: ... (200; 14.212351ms)
Jan 25 22:59:08.232: INFO: (9) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname2/proxy/: bar (200; 14.347658ms)
Jan 25 22:59:08.232: INFO: (9) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname1/proxy/: foo (200; 14.592086ms)
Jan 25 22:59:08.233: INFO: (9) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname1/proxy/: tls baz (200; 15.032101ms)
Jan 25 22:59:08.233: INFO: (9) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname1/proxy/: foo (200; 15.265162ms)
Jan 25 22:59:08.233: INFO: (9) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname2/proxy/: bar (200; 15.23218ms)
Jan 25 22:59:08.241: INFO: (10) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:462/proxy/: tls qux (200; 7.659328ms)
Jan 25 22:59:08.242: INFO: (10) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx/proxy/: test (200; 8.314441ms)
Jan 25 22:59:08.244: INFO: (10) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:443/proxy/: ... (200; 15.334702ms)
Jan 25 22:59:08.249: INFO: (10) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 15.574813ms)
Jan 25 22:59:08.250: INFO: (10) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname1/proxy/: foo (200; 16.700171ms)
Jan 25 22:59:08.251: INFO: (10) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:1080/proxy/: test<... (200; 17.28942ms)
Jan 25 22:59:08.264: INFO: (11) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname1/proxy/: tls baz (200; 13.329495ms)
Jan 25 22:59:08.264: INFO: (11) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:462/proxy/: tls qux (200; 13.324333ms)
Jan 25 22:59:08.264: INFO: (11) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname1/proxy/: foo (200; 13.553874ms)
Jan 25 22:59:08.265: INFO: (11) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname1/proxy/: foo (200; 13.676833ms)
Jan 25 22:59:08.265: INFO: (11) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx/proxy/: test (200; 13.937944ms)
Jan 25 22:59:08.267: INFO: (11) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:1080/proxy/: ... (200; 15.754344ms)
Jan 25 22:59:08.267: INFO: (11) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 15.697711ms)
Jan 25 22:59:08.267: INFO: (11) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:443/proxy/: test<... (200; 17.375482ms)
Jan 25 22:59:08.268: INFO: (11) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:460/proxy/: tls baz (200; 17.372485ms)
Jan 25 22:59:08.268: INFO: (11) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname2/proxy/: bar (200; 17.353349ms)
Jan 25 22:59:08.270: INFO: (11) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 18.971049ms)
Jan 25 22:59:08.270: INFO: (11) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname2/proxy/: bar (200; 19.17888ms)
Jan 25 22:59:08.276: INFO: (12) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 4.758688ms)
Jan 25 22:59:08.277: INFO: (12) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 6.383644ms)
Jan 25 22:59:08.283: INFO: (12) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:1080/proxy/: ... (200; 10.843447ms)
Jan 25 22:59:08.283: INFO: (12) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:443/proxy/: test<... (200; 11.825709ms)
Jan 25 22:59:08.284: INFO: (12) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:460/proxy/: tls baz (200; 12.259464ms)
Jan 25 22:59:08.284: INFO: (12) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx/proxy/: test (200; 12.540171ms)
Jan 25 22:59:08.285: INFO: (12) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname1/proxy/: foo (200; 14.423311ms)
Jan 25 22:59:08.285: INFO: (12) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname2/proxy/: bar (200; 14.345487ms)
Jan 25 22:59:08.286: INFO: (12) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname2/proxy/: tls qux (200; 14.427802ms)
Jan 25 22:59:08.286: INFO: (12) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname1/proxy/: foo (200; 14.371326ms)
Jan 25 22:59:08.286: INFO: (12) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname1/proxy/: tls baz (200; 14.148692ms)
Jan 25 22:59:08.286: INFO: (12) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname2/proxy/: bar (200; 15.581254ms)
Jan 25 22:59:08.300: INFO: (13) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:443/proxy/: test (200; 14.880544ms)
Jan 25 22:59:08.302: INFO: (13) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:1080/proxy/: test<... (200; 15.0634ms)
Jan 25 22:59:08.302: INFO: (13) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 15.433143ms)
Jan 25 22:59:08.303: INFO: (13) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 16.040825ms)
Jan 25 22:59:08.303: INFO: (13) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:460/proxy/: tls baz (200; 16.005809ms)
Jan 25 22:59:08.303: INFO: (13) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname2/proxy/: bar (200; 15.989662ms)
Jan 25 22:59:08.303: INFO: (13) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:1080/proxy/: ... (200; 16.548511ms)
Jan 25 22:59:08.305: INFO: (13) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname1/proxy/: foo (200; 17.601527ms)
Jan 25 22:59:08.305: INFO: (13) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname1/proxy/: foo (200; 18.407648ms)
Jan 25 22:59:08.305: INFO: (13) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname1/proxy/: tls baz (200; 18.238623ms)
Jan 25 22:59:08.305: INFO: (13) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname2/proxy/: tls qux (200; 18.209571ms)
Jan 25 22:59:08.308: INFO: (13) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname2/proxy/: bar (200; 21.398261ms)
Jan 25 22:59:08.317: INFO: (14) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx/proxy/: test (200; 7.834984ms)
Jan 25 22:59:08.318: INFO: (14) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname1/proxy/: foo (200; 9.738228ms)
Jan 25 22:59:08.318: INFO: (14) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 9.177384ms)
Jan 25 22:59:08.319: INFO: (14) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname2/proxy/: bar (200; 9.777028ms)
Jan 25 22:59:08.319: INFO: (14) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:462/proxy/: tls qux (200; 10.153066ms)
Jan 25 22:59:08.321: INFO: (14) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:443/proxy/: test<... (200; 12.693323ms)
Jan 25 22:59:08.321: INFO: (14) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:460/proxy/: tls baz (200; 12.412344ms)
Jan 25 22:59:08.321: INFO: (14) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:1080/proxy/: ... (200; 12.730927ms)
Jan 25 22:59:08.321: INFO: (14) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 12.654907ms)
Jan 25 22:59:08.322: INFO: (14) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 12.957742ms)
Jan 25 22:59:08.323: INFO: (14) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname2/proxy/: bar (200; 13.909646ms)
Jan 25 22:59:08.323: INFO: (14) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname1/proxy/: foo (200; 13.691887ms)
Jan 25 22:59:08.323: INFO: (14) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname1/proxy/: tls baz (200; 14.548178ms)
Jan 25 22:59:08.323: INFO: (14) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 14.447237ms)
Jan 25 22:59:08.323: INFO: (14) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname2/proxy/: tls qux (200; 14.699549ms)
Jan 25 22:59:08.329: INFO: (15) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:443/proxy/: ... (200; 10.462632ms)
Jan 25 22:59:08.335: INFO: (15) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 11.188309ms)
Jan 25 22:59:08.336: INFO: (15) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx/proxy/: test (200; 11.526023ms)
Jan 25 22:59:08.336: INFO: (15) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:460/proxy/: tls baz (200; 12.749733ms)
Jan 25 22:59:08.337: INFO: (15) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:1080/proxy/: test<... (200; 12.660193ms)
Jan 25 22:59:08.338: INFO: (15) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname2/proxy/: bar (200; 13.385051ms)
Jan 25 22:59:08.338: INFO: (15) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname1/proxy/: tls baz (200; 13.893404ms)
Jan 25 22:59:08.338: INFO: (15) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname1/proxy/: foo (200; 13.588018ms)
Jan 25 22:59:08.339: INFO: (15) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 14.824439ms)
Jan 25 22:59:08.344: INFO: (15) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname1/proxy/: foo (200; 20.265795ms)
Jan 25 22:59:08.346: INFO: (15) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname2/proxy/: tls qux (200; 22.225958ms)
Jan 25 22:59:08.347: INFO: (15) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname2/proxy/: bar (200; 22.842742ms)
Jan 25 22:59:08.364: INFO: (16) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname2/proxy/: bar (200; 17.188188ms)
Jan 25 22:59:08.366: INFO: (16) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname2/proxy/: bar (200; 18.94053ms)
Jan 25 22:59:08.368: INFO: (16) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 20.681408ms)
Jan 25 22:59:08.368: INFO: (16) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname2/proxy/: tls qux (200; 20.687066ms)
Jan 25 22:59:08.368: INFO: (16) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname1/proxy/: foo (200; 20.748476ms)
Jan 25 22:59:08.368: INFO: (16) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname1/proxy/: tls baz (200; 21.064552ms)
Jan 25 22:59:08.368: INFO: (16) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 21.325003ms)
Jan 25 22:59:08.368: INFO: (16) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:462/proxy/: tls qux (200; 21.298704ms)
Jan 25 22:59:08.368: INFO: (16) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:1080/proxy/: ... (200; 21.632923ms)
Jan 25 22:59:08.368: INFO: (16) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:1080/proxy/: test<... (200; 21.256884ms)
Jan 25 22:59:08.369: INFO: (16) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:460/proxy/: tls baz (200; 21.490207ms)
Jan 25 22:59:08.369: INFO: (16) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 21.985101ms)
Jan 25 22:59:08.369: INFO: (16) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname1/proxy/: foo (200; 21.728099ms)
Jan 25 22:59:08.369: INFO: (16) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 22.159709ms)
Jan 25 22:59:08.369: INFO: (16) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:443/proxy/: test (200; 23.291802ms)
Jan 25 22:59:08.377: INFO: (17) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 6.452064ms)
Jan 25 22:59:08.377: INFO: (17) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 6.530471ms)
Jan 25 22:59:08.382: INFO: (17) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname1/proxy/: foo (200; 11.535245ms)
Jan 25 22:59:08.382: INFO: (17) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:1080/proxy/: ... (200; 11.419512ms)
Jan 25 22:59:08.382: INFO: (17) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:1080/proxy/: test<... (200; 11.938495ms)
Jan 25 22:59:08.383: INFO: (17) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:443/proxy/: test (200; 13.587209ms)
Jan 25 22:59:08.384: INFO: (17) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname2/proxy/: bar (200; 13.844386ms)
Jan 25 22:59:08.386: INFO: (17) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname1/proxy/: tls baz (200; 15.178556ms)
Jan 25 22:59:08.390: INFO: (18) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 4.369872ms)
Jan 25 22:59:08.395: INFO: (18) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:462/proxy/: tls qux (200; 8.640564ms)
Jan 25 22:59:08.395: INFO: (18) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:1080/proxy/: ... (200; 9.040806ms)
Jan 25 22:59:08.395: INFO: (18) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:1080/proxy/: test<... (200; 9.316517ms)
Jan 25 22:59:08.395: INFO: (18) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 9.379212ms)
Jan 25 22:59:08.396: INFO: (18) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname2/proxy/: bar (200; 10.313305ms)
Jan 25 22:59:08.397: INFO: (18) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname2/proxy/: bar (200; 10.655085ms)
Jan 25 22:59:08.397: INFO: (18) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname1/proxy/: foo (200; 10.867183ms)
Jan 25 22:59:08.397: INFO: (18) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname1/proxy/: foo (200; 10.945627ms)
Jan 25 22:59:08.398: INFO: (18) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname1/proxy/: tls baz (200; 11.466415ms)
Jan 25 22:59:08.399: INFO: (18) /api/v1/namespaces/proxy-5821/services/https:proxy-service-rnbg6:tlsportname2/proxy/: tls qux (200; 13.366768ms)
Jan 25 22:59:08.399: INFO: (18) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:162/proxy/: bar (200; 13.503447ms)
Jan 25 22:59:08.399: INFO: (18) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:443/proxy/: test (200; 13.625047ms)
Jan 25 22:59:08.400: INFO: (18) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 13.612707ms)
Jan 25 22:59:08.401: INFO: (18) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:460/proxy/: tls baz (200; 14.518461ms)
Jan 25 22:59:08.410: INFO: (19) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx/proxy/: test (200; 8.756396ms)
Jan 25 22:59:08.413: INFO: (19) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname1/proxy/: foo (200; 11.64746ms)
Jan 25 22:59:08.413: INFO: (19) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:460/proxy/: tls baz (200; 11.860917ms)
Jan 25 22:59:08.413: INFO: (19) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:1080/proxy/: ... (200; 12.534099ms)
Jan 25 22:59:08.413: INFO: (19) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:462/proxy/: tls qux (200; 12.24123ms)
Jan 25 22:59:08.413: INFO: (19) /api/v1/namespaces/proxy-5821/pods/http:proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 12.303132ms)
Jan 25 22:59:08.414: INFO: (19) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname2/proxy/: bar (200; 12.819988ms)
Jan 25 22:59:08.414: INFO: (19) /api/v1/namespaces/proxy-5821/services/proxy-service-rnbg6:portname1/proxy/: foo (200; 13.022529ms)
Jan 25 22:59:08.414: INFO: (19) /api/v1/namespaces/proxy-5821/services/http:proxy-service-rnbg6:portname2/proxy/: bar (200; 13.01484ms)
Jan 25 22:59:08.415: INFO: (19) /api/v1/namespaces/proxy-5821/pods/https:proxy-service-rnbg6-fs5xx:443/proxy/: test<... (200; 14.994246ms)
Jan 25 22:59:08.419: INFO: (19) /api/v1/namespaces/proxy-5821/pods/proxy-service-rnbg6-fs5xx:160/proxy/: foo (200; 18.104005ms)
STEP: deleting ReplicationController proxy-service-rnbg6 in namespace proxy-5821, will wait for the garbage collector to delete the pods
Jan 25 22:59:08.514: INFO: Deleting ReplicationController proxy-service-rnbg6 took: 37.742183ms
Jan 25 22:59:08.915: INFO: Terminating ReplicationController proxy-service-rnbg6 pods took: 400.685716ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:59:22.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5821" for this suite.

• [SLOW TEST:28.705 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":248,"skipped":4132,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:59:22.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-9e080b4f-3189-4d85-9b86-87a6f26c3f0c
STEP: Creating a pod to test consume configMaps
Jan 25 22:59:22.546: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d33ccd06-1114-47b9-a02c-dabe46aa805e" in namespace "projected-7220" to be "success or failure"
Jan 25 22:59:22.555: INFO: Pod "pod-projected-configmaps-d33ccd06-1114-47b9-a02c-dabe46aa805e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.387751ms
Jan 25 22:59:24.621: INFO: Pod "pod-projected-configmaps-d33ccd06-1114-47b9-a02c-dabe46aa805e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0741752s
Jan 25 22:59:26.637: INFO: Pod "pod-projected-configmaps-d33ccd06-1114-47b9-a02c-dabe46aa805e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089702605s
Jan 25 22:59:28.667: INFO: Pod "pod-projected-configmaps-d33ccd06-1114-47b9-a02c-dabe46aa805e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.1199483s
Jan 25 22:59:30.718: INFO: Pod "pod-projected-configmaps-d33ccd06-1114-47b9-a02c-dabe46aa805e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.171256282s
Jan 25 22:59:32.730: INFO: Pod "pod-projected-configmaps-d33ccd06-1114-47b9-a02c-dabe46aa805e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.183091166s
STEP: Saw pod success
Jan 25 22:59:32.731: INFO: Pod "pod-projected-configmaps-d33ccd06-1114-47b9-a02c-dabe46aa805e" satisfied condition "success or failure"
Jan 25 22:59:32.735: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-d33ccd06-1114-47b9-a02c-dabe46aa805e container projected-configmap-volume-test: 
STEP: delete the pod
Jan 25 22:59:32.791: INFO: Waiting for pod pod-projected-configmaps-d33ccd06-1114-47b9-a02c-dabe46aa805e to disappear
Jan 25 22:59:32.796: INFO: Pod pod-projected-configmaps-d33ccd06-1114-47b9-a02c-dabe46aa805e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:59:32.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7220" for this suite.

• [SLOW TEST:10.385 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4132,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:59:32.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-eba48c89-41af-47f2-b027-b94a842f8b0c
STEP: Creating configMap with name cm-test-opt-upd-941a2fe0-3ca2-4637-96e7-062a8890ffa5
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-eba48c89-41af-47f2-b027-b94a842f8b0c
STEP: Updating configmap cm-test-opt-upd-941a2fe0-3ca2-4637-96e7-062a8890ffa5
STEP: Creating configMap with name cm-test-opt-create-278431e4-1b4d-400f-8fc4-6f4b0d6a9ba1
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:59:47.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6856" for this suite.

• [SLOW TEST:14.393 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4152,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:59:47.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 22:59:47.457: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"d69ca8dd-299b-428f-8ef0-ad552bdcdb84", Controller:(*bool)(0xc000ca9352), BlockOwnerDeletion:(*bool)(0xc000ca9353)}}
Jan 25 22:59:47.472: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"abb40a29-e137-491b-b885-63d9d8a38edc", Controller:(*bool)(0xc000ca94d6), BlockOwnerDeletion:(*bool)(0xc000ca94d7)}}
Jan 25 22:59:47.527: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"011208a2-a767-477a-8c96-f9a3181ccac9", Controller:(*bool)(0xc00414f512), BlockOwnerDeletion:(*bool)(0xc00414f513)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 22:59:52.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-351" for this suite.

• [SLOW TEST:5.474 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":251,"skipped":4157,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 22:59:52.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 22:59:52.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 25 22:59:56.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8345 create -f -'
Jan 25 22:59:59.567: INFO: stderr: ""
Jan 25 22:59:59.567: INFO: stdout: "e2e-test-crd-publish-openapi-1858-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jan 25 22:59:59.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8345 delete e2e-test-crd-publish-openapi-1858-crds test-cr'
Jan 25 22:59:59.770: INFO: stderr: ""
Jan 25 22:59:59.770: INFO: stdout: "e2e-test-crd-publish-openapi-1858-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Jan 25 22:59:59.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8345 apply -f -'
Jan 25 23:00:00.065: INFO: stderr: ""
Jan 25 23:00:00.065: INFO: stdout: "e2e-test-crd-publish-openapi-1858-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jan 25 23:00:00.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8345 delete e2e-test-crd-publish-openapi-1858-crds test-cr'
Jan 25 23:00:00.179: INFO: stderr: ""
Jan 25 23:00:00.179: INFO: stdout: "e2e-test-crd-publish-openapi-1858-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Jan 25 23:00:00.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1858-crds'
Jan 25 23:00:00.585: INFO: stderr: ""
Jan 25 23:00:00.586: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1858-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:00:02.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8345" for this suite.

• [SLOW TEST:9.908 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":252,"skipped":4159,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:00:02.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-7zmw
STEP: Creating a pod to test atomic-volume-subpath
Jan 25 23:00:02.828: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-7zmw" in namespace "subpath-753" to be "success or failure"
Jan 25 23:00:02.857: INFO: Pod "pod-subpath-test-secret-7zmw": Phase="Pending", Reason="", readiness=false. Elapsed: 28.871469ms
Jan 25 23:00:04.865: INFO: Pod "pod-subpath-test-secret-7zmw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037318129s
Jan 25 23:00:06.876: INFO: Pod "pod-subpath-test-secret-7zmw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04765876s
Jan 25 23:00:08.891: INFO: Pod "pod-subpath-test-secret-7zmw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062747205s
Jan 25 23:00:10.898: INFO: Pod "pod-subpath-test-secret-7zmw": Phase="Running", Reason="", readiness=true. Elapsed: 8.070447714s
Jan 25 23:00:12.906: INFO: Pod "pod-subpath-test-secret-7zmw": Phase="Running", Reason="", readiness=true. Elapsed: 10.077999814s
Jan 25 23:00:14.917: INFO: Pod "pod-subpath-test-secret-7zmw": Phase="Running", Reason="", readiness=true. Elapsed: 12.089162325s
Jan 25 23:00:16.927: INFO: Pod "pod-subpath-test-secret-7zmw": Phase="Running", Reason="", readiness=true. Elapsed: 14.098697229s
Jan 25 23:00:18.954: INFO: Pod "pod-subpath-test-secret-7zmw": Phase="Running", Reason="", readiness=true. Elapsed: 16.125790193s
Jan 25 23:00:20.964: INFO: Pod "pod-subpath-test-secret-7zmw": Phase="Running", Reason="", readiness=true. Elapsed: 18.136445976s
Jan 25 23:00:23.339: INFO: Pod "pod-subpath-test-secret-7zmw": Phase="Running", Reason="", readiness=true. Elapsed: 20.510590199s
Jan 25 23:00:25.349: INFO: Pod "pod-subpath-test-secret-7zmw": Phase="Running", Reason="", readiness=true. Elapsed: 22.520520732s
Jan 25 23:00:27.355: INFO: Pod "pod-subpath-test-secret-7zmw": Phase="Running", Reason="", readiness=true. Elapsed: 24.52736265s
Jan 25 23:00:29.364: INFO: Pod "pod-subpath-test-secret-7zmw": Phase="Running", Reason="", readiness=true. Elapsed: 26.535553999s
Jan 25 23:00:31.369: INFO: Pod "pod-subpath-test-secret-7zmw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.541001781s
STEP: Saw pod success
Jan 25 23:00:31.369: INFO: Pod "pod-subpath-test-secret-7zmw" satisfied condition "success or failure"
Jan 25 23:00:31.373: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-7zmw container test-container-subpath-secret-7zmw: 
STEP: delete the pod
Jan 25 23:00:31.428: INFO: Waiting for pod pod-subpath-test-secret-7zmw to disappear
Jan 25 23:00:31.440: INFO: Pod pod-subpath-test-secret-7zmw no longer exists
STEP: Deleting pod pod-subpath-test-secret-7zmw
Jan 25 23:00:31.440: INFO: Deleting pod "pod-subpath-test-secret-7zmw" in namespace "subpath-753"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:00:31.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-753" for this suite.

• [SLOW TEST:28.897 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":253,"skipped":4169,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:00:31.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-7092
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-7092
I0125 23:00:31.912560       8 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7092, replica count: 2
I0125 23:00:34.964154       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 23:00:37.964811       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 23:00:40.965449       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 23:00:43.966377       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 25 23:00:43.966: INFO: Creating new exec pod
Jan 25 23:00:52.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7092 execpodtr2hj -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jan 25 23:00:53.349: INFO: stderr: "I0125 23:00:53.192736    4720 log.go:172] (0xc000744630) (0xc00075c320) Create stream\nI0125 23:00:53.192937    4720 log.go:172] (0xc000744630) (0xc00075c320) Stream added, broadcasting: 1\nI0125 23:00:53.195904    4720 log.go:172] (0xc000744630) Reply frame received for 1\nI0125 23:00:53.195977    4720 log.go:172] (0xc000744630) (0xc00059e640) Create stream\nI0125 23:00:53.195990    4720 log.go:172] (0xc000744630) (0xc00059e640) Stream added, broadcasting: 3\nI0125 23:00:53.197043    4720 log.go:172] (0xc000744630) Reply frame received for 3\nI0125 23:00:53.197076    4720 log.go:172] (0xc000744630) (0xc0006a4aa0) Create stream\nI0125 23:00:53.197085    4720 log.go:172] (0xc000744630) (0xc0006a4aa0) Stream added, broadcasting: 5\nI0125 23:00:53.197943    4720 log.go:172] (0xc000744630) Reply frame received for 5\nI0125 23:00:53.255421    4720 log.go:172] (0xc000744630) Data frame received for 5\nI0125 23:00:53.255470    4720 log.go:172] (0xc0006a4aa0) (5) Data frame handling\nI0125 23:00:53.255484    4720 log.go:172] (0xc0006a4aa0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0125 23:00:53.267207    4720 log.go:172] (0xc000744630) Data frame received for 5\nI0125 23:00:53.267241    4720 log.go:172] (0xc0006a4aa0) (5) Data frame handling\nI0125 23:00:53.267262    4720 log.go:172] (0xc0006a4aa0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0125 23:00:53.339272    4720 log.go:172] (0xc000744630) (0xc00059e640) Stream removed, broadcasting: 3\nI0125 23:00:53.339604    4720 log.go:172] (0xc000744630) Data frame received for 1\nI0125 23:00:53.339639    4720 log.go:172] (0xc00075c320) (1) Data frame handling\nI0125 23:00:53.339799    4720 log.go:172] (0xc00075c320) (1) Data frame sent\nI0125 23:00:53.339866    4720 log.go:172] (0xc000744630) (0xc00075c320) Stream removed, broadcasting: 1\nI0125 23:00:53.341052    4720 log.go:172] (0xc000744630) (0xc0006a4aa0) Stream removed, broadcasting: 5\nI0125 23:00:53.341156    4720 log.go:172] (0xc000744630) Go away received\nI0125 23:00:53.341349    4720 log.go:172] (0xc000744630) (0xc00075c320) Stream removed, broadcasting: 1\nI0125 23:00:53.341394    4720 log.go:172] (0xc000744630) (0xc00059e640) Stream removed, broadcasting: 3\nI0125 23:00:53.341405    4720 log.go:172] (0xc000744630) (0xc0006a4aa0) Stream removed, broadcasting: 5\n"
Jan 25 23:00:53.350: INFO: stdout: ""
Jan 25 23:00:53.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7092 execpodtr2hj -- /bin/sh -x -c nc -zv -t -w 2 10.96.36.222 80'
Jan 25 23:00:53.677: INFO: stderr: "I0125 23:00:53.481030    4743 log.go:172] (0xc00010ea50) (0xc0006d19a0) Create stream\nI0125 23:00:53.481181    4743 log.go:172] (0xc00010ea50) (0xc0006d19a0) Stream added, broadcasting: 1\nI0125 23:00:53.483520    4743 log.go:172] (0xc00010ea50) Reply frame received for 1\nI0125 23:00:53.483545    4743 log.go:172] (0xc00010ea50) (0xc0008f8000) Create stream\nI0125 23:00:53.483553    4743 log.go:172] (0xc00010ea50) (0xc0008f8000) Stream added, broadcasting: 3\nI0125 23:00:53.484312    4743 log.go:172] (0xc00010ea50) Reply frame received for 3\nI0125 23:00:53.484336    4743 log.go:172] (0xc00010ea50) (0xc0001c8000) Create stream\nI0125 23:00:53.484344    4743 log.go:172] (0xc00010ea50) (0xc0001c8000) Stream added, broadcasting: 5\nI0125 23:00:53.485356    4743 log.go:172] (0xc00010ea50) Reply frame received for 5\nI0125 23:00:53.550117    4743 log.go:172] (0xc00010ea50) Data frame received for 5\nI0125 23:00:53.550218    4743 log.go:172] (0xc0001c8000) (5) Data frame handling\nI0125 23:00:53.550239    4743 log.go:172] (0xc0001c8000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.36.222 80\nI0125 23:00:53.551116    4743 log.go:172] (0xc00010ea50) Data frame received for 5\nI0125 23:00:53.551166    4743 log.go:172] (0xc0001c8000) (5) Data frame handling\nI0125 23:00:53.551191    4743 log.go:172] (0xc0001c8000) (5) Data frame sent\nConnection to 10.96.36.222 80 port [tcp/http] succeeded!\nI0125 23:00:53.665250    4743 log.go:172] (0xc00010ea50) (0xc0001c8000) Stream removed, broadcasting: 5\nI0125 23:00:53.665570    4743 log.go:172] (0xc00010ea50) (0xc0008f8000) Stream removed, broadcasting: 3\nI0125 23:00:53.665739    4743 log.go:172] (0xc00010ea50) Data frame received for 1\nI0125 23:00:53.665814    4743 log.go:172] (0xc0006d19a0) (1) Data frame handling\nI0125 23:00:53.665868    4743 log.go:172] (0xc0006d19a0) (1) Data frame sent\nI0125 23:00:53.665885    4743 log.go:172] (0xc00010ea50) (0xc0006d19a0) Stream removed, broadcasting: 1\nI0125 23:00:53.666093    4743 log.go:172] (0xc00010ea50) Go away received\nI0125 23:00:53.668158    4743 log.go:172] (0xc00010ea50) (0xc0006d19a0) Stream removed, broadcasting: 1\nI0125 23:00:53.668179    4743 log.go:172] (0xc00010ea50) (0xc0008f8000) Stream removed, broadcasting: 3\nI0125 23:00:53.668197    4743 log.go:172] (0xc00010ea50) (0xc0001c8000) Stream removed, broadcasting: 5\n"
Jan 25 23:00:53.678: INFO: stdout: ""
Jan 25 23:00:53.678: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:00:53.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7092" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:22.244 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":254,"skipped":4179,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:00:53.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 23:00:54.662: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 23:00:56.683: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590054, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590054, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590054, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590054, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 23:00:58.688: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590054, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590054, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590054, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590054, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 23:01:00.691: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590054, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590054, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590054, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590054, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 23:01:02.965: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590054, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590054, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590054, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590054, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 23:01:04.867: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590054, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590054, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590054, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590054, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 23:01:06.694: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590054, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590054, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590054, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590054, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 23:01:09.745: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:01:09.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-986" for this suite.
STEP: Destroying namespace "webhook-986-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:16.440 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":255,"skipped":4195,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:01:10.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-7703
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 25 23:01:10.262: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 25 23:01:46.475: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-7703 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 23:01:46.475: INFO: >>> kubeConfig: /root/.kube/config
I0125 23:01:46.556836       8 log.go:172] (0xc0069b26e0) (0xc0016f5900) Create stream
I0125 23:01:46.557505       8 log.go:172] (0xc0069b26e0) (0xc0016f5900) Stream added, broadcasting: 1
I0125 23:01:46.565030       8 log.go:172] (0xc0069b26e0) Reply frame received for 1
I0125 23:01:46.565175       8 log.go:172] (0xc0069b26e0) (0xc0018b0960) Create stream
I0125 23:01:46.565188       8 log.go:172] (0xc0069b26e0) (0xc0018b0960) Stream added, broadcasting: 3
I0125 23:01:46.572606       8 log.go:172] (0xc0069b26e0) Reply frame received for 3
I0125 23:01:46.572770       8 log.go:172] (0xc0069b26e0) (0xc0016f5ea0) Create stream
I0125 23:01:46.572809       8 log.go:172] (0xc0069b26e0) (0xc0016f5ea0) Stream added, broadcasting: 5
I0125 23:01:46.575390       8 log.go:172] (0xc0069b26e0) Reply frame received for 5
I0125 23:01:46.708960       8 log.go:172] (0xc0069b26e0) Data frame received for 3
I0125 23:01:46.709100       8 log.go:172] (0xc0018b0960) (3) Data frame handling
I0125 23:01:46.709125       8 log.go:172] (0xc0018b0960) (3) Data frame sent
I0125 23:01:46.789735       8 log.go:172] (0xc0069b26e0) (0xc0018b0960) Stream removed, broadcasting: 3
I0125 23:01:46.789949       8 log.go:172] (0xc0069b26e0) Data frame received for 1
I0125 23:01:46.789976       8 log.go:172] (0xc0016f5900) (1) Data frame handling
I0125 23:01:46.790013       8 log.go:172] (0xc0016f5900) (1) Data frame sent
I0125 23:01:46.790073       8 log.go:172] (0xc0069b26e0) (0xc0016f5900) Stream removed, broadcasting: 1
I0125 23:01:46.790421       8 log.go:172] (0xc0069b26e0) (0xc0016f5ea0) Stream removed, broadcasting: 5
I0125 23:01:46.790471       8 log.go:172] (0xc0069b26e0) Go away received
I0125 23:01:46.791106       8 log.go:172] (0xc0069b26e0) (0xc0016f5900) Stream removed, broadcasting: 1
I0125 23:01:46.791201       8 log.go:172] (0xc0069b26e0) (0xc0018b0960) Stream removed, broadcasting: 3
I0125 23:01:46.791247       8 log.go:172] (0xc0069b26e0) (0xc0016f5ea0) Stream removed, broadcasting: 5
Jan 25 23:01:46.791: INFO: Waiting for responses: map[]
Jan 25 23:01:46.797: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-7703 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 23:01:46.797: INFO: >>> kubeConfig: /root/.kube/config
I0125 23:01:46.841057       8 log.go:172] (0xc002b044d0) (0xc0018b0c80) Create stream
I0125 23:01:46.841282       8 log.go:172] (0xc002b044d0) (0xc0018b0c80) Stream added, broadcasting: 1
I0125 23:01:46.845243       8 log.go:172] (0xc002b044d0) Reply frame received for 1
I0125 23:01:46.845324       8 log.go:172] (0xc002b044d0) (0xc000a0a000) Create stream
I0125 23:01:46.845344       8 log.go:172] (0xc002b044d0) (0xc000a0a000) Stream added, broadcasting: 3
I0125 23:01:46.847698       8 log.go:172] (0xc002b044d0) Reply frame received for 3
I0125 23:01:46.847742       8 log.go:172] (0xc002b044d0) (0xc0018b0d20) Create stream
I0125 23:01:46.847749       8 log.go:172] (0xc002b044d0) (0xc0018b0d20) Stream added, broadcasting: 5
I0125 23:01:46.850519       8 log.go:172] (0xc002b044d0) Reply frame received for 5
I0125 23:01:46.928613       8 log.go:172] (0xc002b044d0) Data frame received for 3
I0125 23:01:46.928732       8 log.go:172] (0xc000a0a000) (3) Data frame handling
I0125 23:01:46.928758       8 log.go:172] (0xc000a0a000) (3) Data frame sent
I0125 23:01:46.998430       8 log.go:172] (0xc002b044d0) (0xc000a0a000) Stream removed, broadcasting: 3
I0125 23:01:46.999185       8 log.go:172] (0xc002b044d0) (0xc0018b0d20) Stream removed, broadcasting: 5
I0125 23:01:46.999410       8 log.go:172] (0xc002b044d0) Data frame received for 1
I0125 23:01:46.999695       8 log.go:172] (0xc0018b0c80) (1) Data frame handling
I0125 23:01:46.999753       8 log.go:172] (0xc0018b0c80) (1) Data frame sent
I0125 23:01:46.999894       8 log.go:172] (0xc002b044d0) (0xc0018b0c80) Stream removed, broadcasting: 1
I0125 23:01:46.999973       8 log.go:172] (0xc002b044d0) Go away received
I0125 23:01:47.000651       8 log.go:172] (0xc002b044d0) (0xc0018b0c80) Stream removed, broadcasting: 1
I0125 23:01:47.000703       8 log.go:172] (0xc002b044d0) (0xc000a0a000) Stream removed, broadcasting: 3
I0125 23:01:47.000743       8 log.go:172] (0xc002b044d0) (0xc0018b0d20) Stream removed, broadcasting: 5
Jan 25 23:01:47.000: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:01:47.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7703" for this suite.

• [SLOW TEST:36.831 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4212,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:01:47.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-8700
STEP: creating replication controller nodeport-test in namespace services-8700
I0125 23:01:48.233395       8 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-8700, replica count: 2
I0125 23:01:51.284841       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 23:01:54.285585       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 23:01:57.286415       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 23:02:00.286976       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 23:02:03.289432       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 23:02:06.290120       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 25 23:02:06.290: INFO: Creating new exec pod
Jan 25 23:02:15.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8700 execpodcz4t4 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Jan 25 23:02:15.779: INFO: stderr: "I0125 23:02:15.575912    4764 log.go:172] (0xc000a0e160) (0xc00069b9a0) Create stream\nI0125 23:02:15.576117    4764 log.go:172] (0xc000a0e160) (0xc00069b9a0) Stream added, broadcasting: 1\nI0125 23:02:15.579517    4764 log.go:172] (0xc000a0e160) Reply frame received for 1\nI0125 23:02:15.579562    4764 log.go:172] (0xc000a0e160) (0xc0008b8000) Create stream\nI0125 23:02:15.579572    4764 log.go:172] (0xc000a0e160) (0xc0008b8000) Stream added, broadcasting: 3\nI0125 23:02:15.581166    4764 log.go:172] (0xc000a0e160) Reply frame received for 3\nI0125 23:02:15.581200    4764 log.go:172] (0xc000a0e160) (0xc00095a000) Create stream\nI0125 23:02:15.581206    4764 log.go:172] (0xc000a0e160) (0xc00095a000) Stream added, broadcasting: 5\nI0125 23:02:15.583619    4764 log.go:172] (0xc000a0e160) Reply frame received for 5\nI0125 23:02:15.665743    4764 log.go:172] (0xc000a0e160) Data frame received for 5\nI0125 23:02:15.665946    4764 log.go:172] (0xc00095a000) (5) Data frame handling\nI0125 23:02:15.666002    4764 log.go:172] (0xc00095a000) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0125 23:02:15.673720    4764 log.go:172] (0xc000a0e160) Data frame received for 5\nI0125 23:02:15.673746    4764 log.go:172] (0xc00095a000) (5) Data frame handling\nI0125 23:02:15.673770    4764 log.go:172] (0xc00095a000) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0125 23:02:15.766178    4764 log.go:172] (0xc000a0e160) Data frame received for 1\nI0125 23:02:15.766374    4764 log.go:172] (0xc000a0e160) (0xc00095a000) Stream removed, broadcasting: 5\nI0125 23:02:15.766494    4764 log.go:172] (0xc00069b9a0) (1) Data frame handling\nI0125 23:02:15.766569    4764 log.go:172] (0xc00069b9a0) (1) Data frame sent\nI0125 23:02:15.766586    4764 log.go:172] (0xc000a0e160) (0xc0008b8000) Stream removed, broadcasting: 3\nI0125 23:02:15.766628    4764 log.go:172] (0xc000a0e160) (0xc00069b9a0) Stream removed, broadcasting: 1\nI0125 23:02:15.766649    4764 log.go:172] (0xc000a0e160) Go away received\nI0125 23:02:15.768326    4764 log.go:172] (0xc000a0e160) (0xc00069b9a0) Stream removed, broadcasting: 1\nI0125 23:02:15.768345    4764 log.go:172] (0xc000a0e160) (0xc0008b8000) Stream removed, broadcasting: 3\nI0125 23:02:15.768354    4764 log.go:172] (0xc000a0e160) (0xc00095a000) Stream removed, broadcasting: 5\n"
Jan 25 23:02:15.779: INFO: stdout: ""
Jan 25 23:02:15.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8700 execpodcz4t4 -- /bin/sh -x -c nc -zv -t -w 2 10.96.103.219 80'
Jan 25 23:02:16.171: INFO: stderr: "I0125 23:02:16.010432    4783 log.go:172] (0xc000b78fd0) (0xc000c72460) Create stream\nI0125 23:02:16.010734    4783 log.go:172] (0xc000b78fd0) (0xc000c72460) Stream added, broadcasting: 1\nI0125 23:02:16.014752    4783 log.go:172] (0xc000b78fd0) Reply frame received for 1\nI0125 23:02:16.014815    4783 log.go:172] (0xc000b78fd0) (0xc000c72500) Create stream\nI0125 23:02:16.014827    4783 log.go:172] (0xc000b78fd0) (0xc000c72500) Stream added, broadcasting: 3\nI0125 23:02:16.016117    4783 log.go:172] (0xc000b78fd0) Reply frame received for 3\nI0125 23:02:16.016173    4783 log.go:172] (0xc000b78fd0) (0xc000cc6140) Create stream\nI0125 23:02:16.016182    4783 log.go:172] (0xc000b78fd0) (0xc000cc6140) Stream added, broadcasting: 5\nI0125 23:02:16.017194    4783 log.go:172] (0xc000b78fd0) Reply frame received for 5\nI0125 23:02:16.068498    4783 log.go:172] (0xc000b78fd0) Data frame received for 5\nI0125 23:02:16.068602    4783 log.go:172] (0xc000cc6140) (5) Data frame handling\nI0125 23:02:16.068631    4783 log.go:172] (0xc000cc6140) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.103.219 80\nI0125 23:02:16.071025    4783 log.go:172] (0xc000b78fd0) Data frame received for 5\nI0125 23:02:16.071050    4783 log.go:172] (0xc000cc6140) (5) Data frame handling\nI0125 23:02:16.071068    4783 log.go:172] (0xc000cc6140) (5) Data frame sent\nConnection to 10.96.103.219 80 port [tcp/http] succeeded!\nI0125 23:02:16.161830    4783 log.go:172] (0xc000b78fd0) Data frame received for 1\nI0125 23:02:16.161985    4783 log.go:172] (0xc000b78fd0) (0xc000c72500) Stream removed, broadcasting: 3\nI0125 23:02:16.162070    4783 log.go:172] (0xc000c72460) (1) Data frame handling\nI0125 23:02:16.162117    4783 log.go:172] (0xc000c72460) (1) Data frame sent\nI0125 23:02:16.162160    4783 log.go:172] (0xc000b78fd0) (0xc000cc6140) Stream removed, broadcasting: 5\nI0125 23:02:16.162201    4783 log.go:172] (0xc000b78fd0) (0xc000c72460) Stream removed, broadcasting: 1\nI0125 23:02:16.162286    4783 log.go:172] (0xc000b78fd0) Go away received\nI0125 23:02:16.163336    4783 log.go:172] (0xc000b78fd0) (0xc000c72460) Stream removed, broadcasting: 1\nI0125 23:02:16.163351    4783 log.go:172] (0xc000b78fd0) (0xc000c72500) Stream removed, broadcasting: 3\nI0125 23:02:16.163363    4783 log.go:172] (0xc000b78fd0) (0xc000cc6140) Stream removed, broadcasting: 5\n"
Jan 25 23:02:16.172: INFO: stdout: ""
Jan 25 23:02:16.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8700 execpodcz4t4 -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 32737'
Jan 25 23:02:16.756: INFO: stderr: "I0125 23:02:16.466661    4803 log.go:172] (0xc000999550) (0xc00094ca00) Create stream\nI0125 23:02:16.467145    4803 log.go:172] (0xc000999550) (0xc00094ca00) Stream added, broadcasting: 1\nI0125 23:02:16.474290    4803 log.go:172] (0xc000999550) Reply frame received for 1\nI0125 23:02:16.474640    4803 log.go:172] (0xc000999550) (0xc00092c640) Create stream\nI0125 23:02:16.474691    4803 log.go:172] (0xc000999550) (0xc00092c640) Stream added, broadcasting: 3\nI0125 23:02:16.477146    4803 log.go:172] (0xc000999550) Reply frame received for 3\nI0125 23:02:16.477209    4803 log.go:172] (0xc000999550) (0xc00094caa0) Create stream\nI0125 23:02:16.477222    4803 log.go:172] (0xc000999550) (0xc00094caa0) Stream added, broadcasting: 5\nI0125 23:02:16.479419    4803 log.go:172] (0xc000999550) Reply frame received for 5\nI0125 23:02:16.590625    4803 log.go:172] (0xc000999550) Data frame received for 5\nI0125 23:02:16.591224    4803 log.go:172] (0xc00094caa0) (5) Data frame handling\nI0125 23:02:16.591349    4803 log.go:172] (0xc00094caa0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 32737\nI0125 23:02:16.597501    4803 log.go:172] (0xc000999550) Data frame received for 5\nI0125 23:02:16.597530    4803 log.go:172] (0xc00094caa0) (5) Data frame handling\nI0125 23:02:16.597599    4803 log.go:172] (0xc00094caa0) (5) Data frame sent\nConnection to 10.96.2.250 32737 port [tcp/32737] succeeded!\nI0125 23:02:16.727542    4803 log.go:172] (0xc000999550) (0xc00094caa0) Stream removed, broadcasting: 5\nI0125 23:02:16.727729    4803 log.go:172] (0xc000999550) Data frame received for 1\nI0125 23:02:16.729732    4803 log.go:172] (0xc000999550) (0xc00092c640) Stream removed, broadcasting: 3\nI0125 23:02:16.730855    4803 log.go:172] (0xc00094ca00) (1) Data frame handling\nI0125 23:02:16.731068    4803 log.go:172] (0xc00094ca00) (1) Data frame sent\nI0125 23:02:16.731094    4803 log.go:172] (0xc000999550) (0xc00094ca00) Stream removed, broadcasting: 1\nI0125 23:02:16.733890    4803 log.go:172] (0xc000999550) Go away received\nI0125 23:02:16.736069    4803 log.go:172] (0xc000999550) (0xc00094ca00) Stream removed, broadcasting: 1\nI0125 23:02:16.736093    4803 log.go:172] (0xc000999550) (0xc00092c640) Stream removed, broadcasting: 3\nI0125 23:02:16.736120    4803 log.go:172] (0xc000999550) (0xc00094caa0) Stream removed, broadcasting: 5\n"
Jan 25 23:02:16.757: INFO: stdout: ""
Jan 25 23:02:16.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8700 execpodcz4t4 -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 32737'
Jan 25 23:02:17.268: INFO: stderr: "I0125 23:02:17.051962    4819 log.go:172] (0xc00061d3f0) (0xc0009dc1e0) Create stream\nI0125 23:02:17.052330    4819 log.go:172] (0xc00061d3f0) (0xc0009dc1e0) Stream added, broadcasting: 1\nI0125 23:02:17.060284    4819 log.go:172] (0xc00061d3f0) Reply frame received for 1\nI0125 23:02:17.060440    4819 log.go:172] (0xc00061d3f0) (0xc0009da000) Create stream\nI0125 23:02:17.060464    4819 log.go:172] (0xc00061d3f0) (0xc0009da000) Stream added, broadcasting: 3\nI0125 23:02:17.063160    4819 log.go:172] (0xc00061d3f0) Reply frame received for 3\nI0125 23:02:17.063203    4819 log.go:172] (0xc00061d3f0) (0xc0008a8000) Create stream\nI0125 23:02:17.063213    4819 log.go:172] (0xc00061d3f0) (0xc0008a8000) Stream added, broadcasting: 5\nI0125 23:02:17.070954    4819 log.go:172] (0xc00061d3f0) Reply frame received for 5\nI0125 23:02:17.160945    4819 log.go:172] (0xc00061d3f0) Data frame received for 5\nI0125 23:02:17.161072    4819 log.go:172] (0xc0008a8000) (5) Data frame handling\nI0125 23:02:17.161106    4819 log.go:172] (0xc0008a8000) (5) Data frame sent\nI0125 23:02:17.161119    4819 log.go:172] (0xc00061d3f0) Data frame received for 5\nI0125 23:02:17.161127    4819 log.go:172] (0xc0008a8000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.1.234 32737\nI0125 23:02:17.161192    4819 log.go:172] (0xc0008a8000) (5) Data frame sent\nI0125 23:02:17.164911    4819 log.go:172] (0xc00061d3f0) Data frame received for 5\nI0125 23:02:17.164929    4819 log.go:172] (0xc0008a8000) (5) Data frame handling\nI0125 23:02:17.164939    4819 log.go:172] (0xc0008a8000) (5) Data frame sent\nConnection to 10.96.1.234 32737 port [tcp/32737] succeeded!\nI0125 23:02:17.253786    4819 log.go:172] (0xc00061d3f0) (0xc0008a8000) Stream removed, broadcasting: 5\nI0125 23:02:17.253959    4819 log.go:172] (0xc00061d3f0) Data frame received for 1\nI0125 23:02:17.253989    4819 log.go:172] (0xc00061d3f0) (0xc0009da000) Stream removed, broadcasting: 3\nI0125 23:02:17.254049    4819 log.go:172] (0xc0009dc1e0) (1) Data frame handling\nI0125 23:02:17.254078    4819 log.go:172] (0xc0009dc1e0) (1) Data frame sent\nI0125 23:02:17.254093    4819 log.go:172] (0xc00061d3f0) (0xc0009dc1e0) Stream removed, broadcasting: 1\nI0125 23:02:17.254128    4819 log.go:172] (0xc00061d3f0) Go away received\nI0125 23:02:17.255481    4819 log.go:172] (0xc00061d3f0) (0xc0009dc1e0) Stream removed, broadcasting: 1\nI0125 23:02:17.255498    4819 log.go:172] (0xc00061d3f0) (0xc0009da000) Stream removed, broadcasting: 3\nI0125 23:02:17.255506    4819 log.go:172] (0xc00061d3f0) (0xc0008a8000) Stream removed, broadcasting: 5\n"
Jan 25 23:02:17.269: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:02:17.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8700" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:30.262 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":257,"skipped":4213,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:02:17.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:02:17.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-5299" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":258,"skipped":4213,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:02:17.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1877
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 25 23:02:17.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-1472'
Jan 25 23:02:17.593: INFO: stderr: ""
Jan 25 23:02:17.593: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Jan 25 23:02:27.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-1472 -o json'
Jan 25 23:02:27.828: INFO: stderr: ""
Jan 25 23:02:27.829: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-25T23:02:17Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-1472\",\n        \"resourceVersion\": \"4349918\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-1472/pods/e2e-test-httpd-pod\",\n        \"uid\": \"f9167946-e8b7-4f0c-bd5f-f90857ee69f0\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-9bjxw\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-9bjxw\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-9bjxw\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-25T23:02:17Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-25T23:02:25Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-25T23:02:25Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-25T23:02:17Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://3f03f98801ff2fe0e037ebe75c01ccbcf34e2732495d1d5cc580381f47f791d4\",\n                \"image\": \"httpd:2.4.38-alpine\",\n                \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-25T23:02:23Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.2.250\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.2\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.44.0.2\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-25T23:02:17Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 25 23:02:27.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1472'
Jan 25 23:02:28.219: INFO: stderr: ""
Jan 25 23:02:28.219: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1882
Jan 25 23:02:28.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1472'
Jan 25 23:02:34.963: INFO: stderr: ""
Jan 25 23:02:34.964: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:02:34.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1472" for this suite.

• [SLOW TEST:17.562 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1873
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":259,"skipped":4230,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:02:34.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan 25 23:02:35.144: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 25 23:02:35.159: INFO: Waiting for terminating namespaces to be deleted...
Jan 25 23:02:35.162: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 25 23:02:35.187: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 25 23:02:35.188: INFO: 	Container weave ready: true, restart count 1
Jan 25 23:02:35.188: INFO: 	Container weave-npc ready: true, restart count 0
Jan 25 23:02:35.188: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan 25 23:02:35.188: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 23:02:35.188: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 25 23:02:35.213: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 25 23:02:35.213: INFO: 	Container kube-scheduler ready: true, restart count 4
Jan 25 23:02:35.213: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 25 23:02:35.213: INFO: 	Container etcd ready: true, restart count 1
Jan 25 23:02:35.213: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 25 23:02:35.213: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 25 23:02:35.213: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 25 23:02:35.213: INFO: 	Container coredns ready: true, restart count 0
Jan 25 23:02:35.213: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 25 23:02:35.213: INFO: 	Container coredns ready: true, restart count 0
Jan 25 23:02:35.213: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan 25 23:02:35.213: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 23:02:35.213: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 25 23:02:35.213: INFO: 	Container weave ready: true, restart count 0
Jan 25 23:02:35.213: INFO: 	Container weave-npc ready: true, restart count 0
Jan 25 23:02:35.213: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 25 23:02:35.213: INFO: 	Container kube-controller-manager ready: true, restart count 3
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15ed4395de2b1403], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:02:36.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3096" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":260,"skipped":4245,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:02:36.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 23:02:36.433: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan 25 23:02:38.501: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:02:38.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1818" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":261,"skipped":4312,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:02:39.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 23:02:52.504: INFO: Waiting up to 5m0s for pod "client-envvars-081877cf-b20c-471f-88f4-8fdb4b262501" in namespace "pods-7173" to be "success or failure"
Jan 25 23:02:52.614: INFO: Pod "client-envvars-081877cf-b20c-471f-88f4-8fdb4b262501": Phase="Pending", Reason="", readiness=false. Elapsed: 109.204512ms
Jan 25 23:02:54.622: INFO: Pod "client-envvars-081877cf-b20c-471f-88f4-8fdb4b262501": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117182609s
Jan 25 23:02:56.631: INFO: Pod "client-envvars-081877cf-b20c-471f-88f4-8fdb4b262501": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125824788s
Jan 25 23:02:58.644: INFO: Pod "client-envvars-081877cf-b20c-471f-88f4-8fdb4b262501": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139188245s
Jan 25 23:03:00.653: INFO: Pod "client-envvars-081877cf-b20c-471f-88f4-8fdb4b262501": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.14781557s
STEP: Saw pod success
Jan 25 23:03:00.653: INFO: Pod "client-envvars-081877cf-b20c-471f-88f4-8fdb4b262501" satisfied condition "success or failure"
Jan 25 23:03:00.658: INFO: Trying to get logs from node jerma-node pod client-envvars-081877cf-b20c-471f-88f4-8fdb4b262501 container env3cont: 
STEP: delete the pod
Jan 25 23:03:00.736: INFO: Waiting for pod client-envvars-081877cf-b20c-471f-88f4-8fdb4b262501 to disappear
Jan 25 23:03:00.742: INFO: Pod client-envvars-081877cf-b20c-471f-88f4-8fdb4b262501 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:03:00.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7173" for this suite.

• [SLOW TEST:21.487 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4345,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:03:00.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Jan 25 23:03:00.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3467'
Jan 25 23:03:01.359: INFO: stderr: ""
Jan 25 23:03:01.359: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 25 23:03:01.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3467'
Jan 25 23:03:01.555: INFO: stderr: ""
Jan 25 23:03:01.556: INFO: stdout: "update-demo-nautilus-c2djm update-demo-nautilus-pd9vh "
Jan 25 23:03:01.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c2djm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3467'
Jan 25 23:03:01.682: INFO: stderr: ""
Jan 25 23:03:01.682: INFO: stdout: ""
Jan 25 23:03:01.682: INFO: update-demo-nautilus-c2djm is created but not running
Jan 25 23:03:06.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3467'
Jan 25 23:03:08.109: INFO: stderr: ""
Jan 25 23:03:08.109: INFO: stdout: "update-demo-nautilus-c2djm update-demo-nautilus-pd9vh "
Jan 25 23:03:08.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c2djm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3467'
Jan 25 23:03:08.501: INFO: stderr: ""
Jan 25 23:03:08.502: INFO: stdout: ""
Jan 25 23:03:08.502: INFO: update-demo-nautilus-c2djm is created but not running
Jan 25 23:03:13.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3467'
Jan 25 23:03:13.703: INFO: stderr: ""
Jan 25 23:03:13.703: INFO: stdout: "update-demo-nautilus-c2djm update-demo-nautilus-pd9vh "
Jan 25 23:03:13.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c2djm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3467'
Jan 25 23:03:13.883: INFO: stderr: ""
Jan 25 23:03:13.884: INFO: stdout: "true"
Jan 25 23:03:13.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c2djm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3467'
Jan 25 23:03:14.023: INFO: stderr: ""
Jan 25 23:03:14.024: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 23:03:14.024: INFO: validating pod update-demo-nautilus-c2djm
Jan 25 23:03:14.036: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 23:03:14.036: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 25 23:03:14.036: INFO: update-demo-nautilus-c2djm is verified up and running
Jan 25 23:03:14.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pd9vh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3467'
Jan 25 23:03:14.232: INFO: stderr: ""
Jan 25 23:03:14.232: INFO: stdout: "true"
Jan 25 23:03:14.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pd9vh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3467'
Jan 25 23:03:14.332: INFO: stderr: ""
Jan 25 23:03:14.333: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 23:03:14.333: INFO: validating pod update-demo-nautilus-pd9vh
Jan 25 23:03:14.339: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 23:03:14.339: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 25 23:03:14.339: INFO: update-demo-nautilus-pd9vh is verified up and running
STEP: using delete to clean up resources
Jan 25 23:03:14.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3467'
Jan 25 23:03:14.457: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 23:03:14.458: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 25 23:03:14.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3467'
Jan 25 23:03:14.606: INFO: stderr: "No resources found in kubectl-3467 namespace.\n"
Jan 25 23:03:14.606: INFO: stdout: ""
Jan 25 23:03:14.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3467 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 25 23:03:14.752: INFO: stderr: ""
Jan 25 23:03:14.753: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:03:14.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3467" for this suite.

• [SLOW TEST:14.057 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":278,"completed":263,"skipped":4369,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:03:14.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1713
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 25 23:03:14.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-637'
Jan 25 23:03:15.040: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 25 23:03:15.040: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1718
Jan 25 23:03:20.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-637'
Jan 25 23:03:21.177: INFO: stderr: ""
Jan 25 23:03:21.177: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:03:21.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-637" for this suite.

• [SLOW TEST:6.403 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1709
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":278,"completed":264,"skipped":4383,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:03:21.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:03:21.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3216" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":265,"skipped":4392,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:03:21.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 25 23:03:21.579: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6a17be2e-64ef-4ac2-b0f7-2fae3a34b443" in namespace "downward-api-7178" to be "success or failure"
Jan 25 23:03:21.583: INFO: Pod "downwardapi-volume-6a17be2e-64ef-4ac2-b0f7-2fae3a34b443": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048145ms
Jan 25 23:03:23.591: INFO: Pod "downwardapi-volume-6a17be2e-64ef-4ac2-b0f7-2fae3a34b443": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012770733s
Jan 25 23:03:25.622: INFO: Pod "downwardapi-volume-6a17be2e-64ef-4ac2-b0f7-2fae3a34b443": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042902213s
Jan 25 23:03:27.730: INFO: Pod "downwardapi-volume-6a17be2e-64ef-4ac2-b0f7-2fae3a34b443": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151046455s
Jan 25 23:03:29.738: INFO: Pod "downwardapi-volume-6a17be2e-64ef-4ac2-b0f7-2fae3a34b443": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.159521514s
STEP: Saw pod success
Jan 25 23:03:29.739: INFO: Pod "downwardapi-volume-6a17be2e-64ef-4ac2-b0f7-2fae3a34b443" satisfied condition "success or failure"
Jan 25 23:03:29.743: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-6a17be2e-64ef-4ac2-b0f7-2fae3a34b443 container client-container: 
STEP: delete the pod
Jan 25 23:03:29.870: INFO: Waiting for pod downwardapi-volume-6a17be2e-64ef-4ac2-b0f7-2fae3a34b443 to disappear
Jan 25 23:03:29.885: INFO: Pod downwardapi-volume-6a17be2e-64ef-4ac2-b0f7-2fae3a34b443 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:03:29.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7178" for this suite.

• [SLOW TEST:8.453 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4401,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:03:29.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 25 23:03:30.667: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jan 25 23:03:32.679: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590210, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590210, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590210, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590210, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 23:03:34.684: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590210, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590210, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590210, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590210, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 23:03:36.685: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590210, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590210, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590210, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590210, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 23:03:39.791: INFO: Waiting for number of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 23:03:39.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:03:41.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-9904" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:11.309 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":267,"skipped":4401,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:03:41.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-386f754e-80ea-4e2d-835e-afa70183a04a
STEP: Creating secret with name secret-projected-all-test-volume-a48150b7-ed93-4083-b5a5-7ed6c216e9a6
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 25 23:03:41.299: INFO: Waiting up to 5m0s for pod "projected-volume-882ae2ba-d56d-4c70-88da-c0eada00389f" in namespace "projected-2560" to be "success or failure"
Jan 25 23:03:41.313: INFO: Pod "projected-volume-882ae2ba-d56d-4c70-88da-c0eada00389f": Phase="Pending", Reason="", readiness=false. Elapsed: 13.762403ms
Jan 25 23:03:43.321: INFO: Pod "projected-volume-882ae2ba-d56d-4c70-88da-c0eada00389f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021996902s
Jan 25 23:03:45.330: INFO: Pod "projected-volume-882ae2ba-d56d-4c70-88da-c0eada00389f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030947143s
Jan 25 23:03:47.340: INFO: Pod "projected-volume-882ae2ba-d56d-4c70-88da-c0eada00389f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040571368s
Jan 25 23:03:49.348: INFO: Pod "projected-volume-882ae2ba-d56d-4c70-88da-c0eada00389f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049051716s
STEP: Saw pod success
Jan 25 23:03:49.348: INFO: Pod "projected-volume-882ae2ba-d56d-4c70-88da-c0eada00389f" satisfied condition "success or failure"
Jan 25 23:03:49.353: INFO: Trying to get logs from node jerma-node pod projected-volume-882ae2ba-d56d-4c70-88da-c0eada00389f container projected-all-volume-test: 
STEP: delete the pod
Jan 25 23:03:49.553: INFO: Waiting for pod projected-volume-882ae2ba-d56d-4c70-88da-c0eada00389f to disappear
Jan 25 23:03:49.663: INFO: Pod projected-volume-882ae2ba-d56d-4c70-88da-c0eada00389f no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:03:49.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2560" for this suite.

• [SLOW TEST:8.458 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4404,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:03:49.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 25 23:03:49.824: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f989525-193c-4fbb-b0c1-ca032f107875" in namespace "downward-api-9527" to be "success or failure"
Jan 25 23:03:49.832: INFO: Pod "downwardapi-volume-9f989525-193c-4fbb-b0c1-ca032f107875": Phase="Pending", Reason="", readiness=false. Elapsed: 7.925458ms
Jan 25 23:03:51.853: INFO: Pod "downwardapi-volume-9f989525-193c-4fbb-b0c1-ca032f107875": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028163799s
Jan 25 23:03:53.865: INFO: Pod "downwardapi-volume-9f989525-193c-4fbb-b0c1-ca032f107875": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040123034s
Jan 25 23:03:55.884: INFO: Pod "downwardapi-volume-9f989525-193c-4fbb-b0c1-ca032f107875": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.059273122s
STEP: Saw pod success
Jan 25 23:03:55.884: INFO: Pod "downwardapi-volume-9f989525-193c-4fbb-b0c1-ca032f107875" satisfied condition "success or failure"
Jan 25 23:03:55.890: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-9f989525-193c-4fbb-b0c1-ca032f107875 container client-container: 
STEP: delete the pod
Jan 25 23:03:55.976: INFO: Waiting for pod downwardapi-volume-9f989525-193c-4fbb-b0c1-ca032f107875 to disappear
Jan 25 23:03:56.065: INFO: Pod downwardapi-volume-9f989525-193c-4fbb-b0c1-ca032f107875 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:03:56.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9527" for this suite.

• [SLOW TEST:6.400 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4406,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:03:56.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 25 23:04:05.286: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:04:05.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4949" for this suite.

• [SLOW TEST:9.343 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":270,"skipped":4411,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:04:05.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 25 23:04:05.684: INFO: Waiting up to 5m0s for pod "downwardapi-volume-52088797-c89b-4a66-a04f-9ee1a1759775" in namespace "downward-api-2807" to be "success or failure"
Jan 25 23:04:05.776: INFO: Pod "downwardapi-volume-52088797-c89b-4a66-a04f-9ee1a1759775": Phase="Pending", Reason="", readiness=false. Elapsed: 92.33267ms
Jan 25 23:04:07.787: INFO: Pod "downwardapi-volume-52088797-c89b-4a66-a04f-9ee1a1759775": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103096234s
Jan 25 23:04:09.797: INFO: Pod "downwardapi-volume-52088797-c89b-4a66-a04f-9ee1a1759775": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112825027s
Jan 25 23:04:11.807: INFO: Pod "downwardapi-volume-52088797-c89b-4a66-a04f-9ee1a1759775": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.123297363s
STEP: Saw pod success
Jan 25 23:04:11.807: INFO: Pod "downwardapi-volume-52088797-c89b-4a66-a04f-9ee1a1759775" satisfied condition "success or failure"
Jan 25 23:04:11.814: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-52088797-c89b-4a66-a04f-9ee1a1759775 container client-container: 
STEP: delete the pod
Jan 25 23:04:11.873: INFO: Waiting for pod downwardapi-volume-52088797-c89b-4a66-a04f-9ee1a1759775 to disappear
Jan 25 23:04:11.880: INFO: Pod downwardapi-volume-52088797-c89b-4a66-a04f-9ee1a1759775 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:04:11.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2807" for this suite.

• [SLOW TEST:6.480 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4418,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:04:11.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Jan 25 23:04:12.036: INFO: Waiting up to 5m0s for pod "client-containers-f0875314-1c27-49f0-8125-86e3ed38ed64" in namespace "containers-4469" to be "success or failure"
Jan 25 23:04:12.057: INFO: Pod "client-containers-f0875314-1c27-49f0-8125-86e3ed38ed64": Phase="Pending", Reason="", readiness=false. Elapsed: 20.098122ms
Jan 25 23:04:14.067: INFO: Pod "client-containers-f0875314-1c27-49f0-8125-86e3ed38ed64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030201244s
Jan 25 23:04:16.073: INFO: Pod "client-containers-f0875314-1c27-49f0-8125-86e3ed38ed64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036137326s
Jan 25 23:04:18.081: INFO: Pod "client-containers-f0875314-1c27-49f0-8125-86e3ed38ed64": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044708595s
Jan 25 23:04:20.091: INFO: Pod "client-containers-f0875314-1c27-49f0-8125-86e3ed38ed64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054475404s
STEP: Saw pod success
Jan 25 23:04:20.091: INFO: Pod "client-containers-f0875314-1c27-49f0-8125-86e3ed38ed64" satisfied condition "success or failure"
Jan 25 23:04:20.096: INFO: Trying to get logs from node jerma-node pod client-containers-f0875314-1c27-49f0-8125-86e3ed38ed64 container test-container: 
STEP: delete the pod
Jan 25 23:04:20.263: INFO: Waiting for pod client-containers-f0875314-1c27-49f0-8125-86e3ed38ed64 to disappear
Jan 25 23:04:20.711: INFO: Pod client-containers-f0875314-1c27-49f0-8125-86e3ed38ed64 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:04:20.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4469" for this suite.

• [SLOW TEST:8.825 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4430,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:04:20.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Jan 25 23:04:20.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1734 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 25 23:04:26.780: INFO: stderr:
kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
I0125 23:04:25.643200    5206 log.go:172] (0xc0009ba0b0) (0xc0007e0140) Create stream
I0125 23:04:25.643562    5206 log.go:172] (0xc0009ba0b0) (0xc0007e0140) Stream added, broadcasting: 1
I0125 23:04:25.650217    5206 log.go:172] (0xc0009ba0b0) Reply frame received for 1
I0125 23:04:25.650261    5206 log.go:172] (0xc0009ba0b0) (0xc0007e01e0) Create stream
I0125 23:04:25.650273    5206 log.go:172] (0xc0009ba0b0) (0xc0007e01e0) Stream added, broadcasting: 3
I0125 23:04:25.651948    5206 log.go:172] (0xc0009ba0b0) Reply frame received for 3
I0125 23:04:25.652009    5206 log.go:172] (0xc0009ba0b0) (0xc0007e6000) Create stream
I0125 23:04:25.652020    5206 log.go:172] (0xc0009ba0b0) (0xc0007e6000) Stream added, broadcasting: 5
I0125 23:04:25.653703    5206 log.go:172] (0xc0009ba0b0) Reply frame received for 5
I0125 23:04:25.653741    5206 log.go:172] (0xc0009ba0b0) (0xc000777ae0) Create stream
I0125 23:04:25.653754    5206 log.go:172] (0xc0009ba0b0) (0xc000777ae0) Stream added, broadcasting: 7
I0125 23:04:25.656149    5206 log.go:172] (0xc0009ba0b0) Reply frame received for 7
I0125 23:04:25.656633    5206 log.go:172] (0xc0007e01e0) (3) Writing data frame
I0125 23:04:25.656846    5206 log.go:172] (0xc0007e01e0) (3) Writing data frame
I0125 23:04:25.663128    5206 log.go:172] (0xc0009ba0b0) Data frame received for 5
I0125 23:04:25.663162    5206 log.go:172] (0xc0007e6000) (5) Data frame handling
I0125 23:04:25.663305    5206 log.go:172] (0xc0007e6000) (5) Data frame sent
I0125 23:04:25.664250    5206 log.go:172] (0xc0009ba0b0) Data frame received for 5
I0125 23:04:25.664269    5206 log.go:172] (0xc0007e6000) (5) Data frame handling
I0125 23:04:25.664281    5206 log.go:172] (0xc0007e6000) (5) Data frame sent
I0125 23:04:26.635058    5206 log.go:172] (0xc0009ba0b0) (0xc0007e01e0) Stream removed, broadcasting: 3
I0125 23:04:26.635655    5206 log.go:172] (0xc0009ba0b0) Data frame received for 1
I0125 23:04:26.635708    5206 log.go:172] (0xc0007e0140) (1) Data frame handling
I0125 23:04:26.635787    5206 log.go:172] (0xc0007e0140) (1) Data frame sent
I0125 23:04:26.635946    5206 log.go:172] (0xc0009ba0b0) (0xc0007e0140) Stream removed, broadcasting: 1
I0125 23:04:26.636637    5206 log.go:172] (0xc0009ba0b0) (0xc0007e6000) Stream removed, broadcasting: 5
I0125 23:04:26.637606    5206 log.go:172] (0xc0009ba0b0) (0xc000777ae0) Stream removed, broadcasting: 7
I0125 23:04:26.637699    5206 log.go:172] (0xc0009ba0b0) Go away received
I0125 23:04:26.638707    5206 log.go:172] (0xc0009ba0b0) (0xc0007e0140) Stream removed, broadcasting: 1
I0125 23:04:26.638747    5206 log.go:172] (0xc0009ba0b0) (0xc0007e01e0) Stream removed, broadcasting: 3
I0125 23:04:26.638766    5206 log.go:172] (0xc0009ba0b0) (0xc0007e6000) Stream removed, broadcasting: 5
I0125 23:04:26.638791    5206 log.go:172] (0xc0009ba0b0) (0xc000777ae0) Stream removed, broadcasting: 7
Jan 25 23:04:26.780: INFO: stdout:
abcd1234stdin closed
job.batch "e2e-test-rm-busybox-job" deleted
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:04:28.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1734" for this suite.

• [SLOW TEST:8.069 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1924
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job  [Conformance]","total":278,"completed":273,"skipped":4439,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:04:28.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 23:04:29.473: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 23:04:31.492: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590269, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590269, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590269, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590269, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 23:04:33.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590269, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590269, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590269, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590269, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 23:04:35.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590269, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590269, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590269, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715590269, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 23:04:38.547: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 23:04:38.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-984-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:04:39.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3349" for this suite.
STEP: Destroying namespace "webhook-3349-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.254 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":274,"skipped":4452,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:04:40.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Jan 25 23:04:40.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Jan 25 23:04:53.108: INFO: >>> kubeConfig: /root/.kube/config
Jan 25 23:04:55.117: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:05:09.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1422" for this suite.

• [SLOW TEST:29.208 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":275,"skipped":4453,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:05:09.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 25 23:05:09.337: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:05:14.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-953" for this suite.

• [SLOW TEST:5.144 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":276,"skipped":4453,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 25 23:05:14.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-ca233177-99b0-4c42-90a0-a6ec6b15f4d6
STEP: Creating a pod to test consume secrets
Jan 25 23:05:14.825: INFO: Waiting up to 5m0s for pod "pod-secrets-69b0d8f6-23e2-4368-b754-47bae1b7caaf" in namespace "secrets-2175" to be "success or failure"
Jan 25 23:05:14.828: INFO: Pod "pod-secrets-69b0d8f6-23e2-4368-b754-47bae1b7caaf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.358113ms
Jan 25 23:05:16.838: INFO: Pod "pod-secrets-69b0d8f6-23e2-4368-b754-47bae1b7caaf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012988378s
Jan 25 23:05:18.848: INFO: Pod "pod-secrets-69b0d8f6-23e2-4368-b754-47bae1b7caaf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022701472s
Jan 25 23:05:20.966: INFO: Pod "pod-secrets-69b0d8f6-23e2-4368-b754-47bae1b7caaf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.141228039s
STEP: Saw pod success
Jan 25 23:05:20.967: INFO: Pod "pod-secrets-69b0d8f6-23e2-4368-b754-47bae1b7caaf" satisfied condition "success or failure"
Jan 25 23:05:21.040: INFO: Trying to get logs from node jerma-node pod pod-secrets-69b0d8f6-23e2-4368-b754-47bae1b7caaf container secret-volume-test: 
STEP: delete the pod
Jan 25 23:05:21.247: INFO: Waiting for pod pod-secrets-69b0d8f6-23e2-4368-b754-47bae1b7caaf to disappear
Jan 25 23:05:21.263: INFO: Pod pod-secrets-69b0d8f6-23e2-4368-b754-47bae1b7caaf no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 25 23:05:21.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2175" for this suite.
STEP: Destroying namespace "secret-namespace-8017" for this suite.

• [SLOW TEST:6.903 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4524,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
Jan 25 23:05:21.313: INFO: Running AfterSuite actions on all nodes
Jan 25 23:05:21.313: INFO: Running AfterSuite actions on node 1
Jan 25 23:05:21.313: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":277,"skipped":4536,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}


Summarizing 1 Failure:

[Fail] [sig-cli] Kubectl client Guestbook application [It] should create and stop a working application  [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2315

Ran 278 of 4814 Specs in 6971.405 seconds
FAIL! -- 277 Passed | 1 Failed | 0 Pending | 4536 Skipped
--- FAIL: TestE2E (6971.50s)
FAIL