I0826 22:44:44.741822 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0826 22:44:44.742015 6 e2e.go:109] Starting e2e run "e62b3103-4f45-45d8-a479-4e9a2dda1ead" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1598481883 - Will randomize all specs
Will run 278 of 4844 specs

Aug 26 22:44:44.800: INFO: >>> kubeConfig: /root/.kube/config
Aug 26 22:44:44.805: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 26 22:44:44.827: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 26 22:44:44.858: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 26 22:44:44.858: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 26 22:44:44.858: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 26 22:44:44.863: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 26 22:44:44.863: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 26 22:44:44.863: INFO: e2e test version: v1.17.11
Aug 26 22:44:44.865: INFO: kube-apiserver version: v1.17.5
Aug 26 22:44:44.865: INFO: >>> kubeConfig: /root/.kube/config
Aug 26 22:44:44.868: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:44:44.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
Aug 26 22:44:44.975: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4358
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-4358
STEP: creating replication controller externalsvc in namespace services-4358
I0826 22:44:45.349857 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-4358, replica count: 2
I0826 22:44:48.400271 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0826 22:44:51.400490 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0826 22:44:54.400688 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the ClusterIP service to type=ExternalName
Aug 26 22:44:54.556: INFO: Creating new exec pod
Aug 26 22:44:58.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4358 execpod4hpmv -- /bin/sh -x -c nslookup clusterip-service'
Aug 26 22:45:02.403: INFO: stderr: "I0826 22:45:02.311454 28 log.go:172] (0xc0000f6fd0) (0xc0002d57c0) Create stream\nI0826 22:45:02.311524 28 log.go:172] (0xc0000f6fd0) (0xc0002d57c0) Stream added, broadcasting: 1\nI0826 22:45:02.314389 28 log.go:172] (0xc0000f6fd0) Reply frame received for 1\nI0826 22:45:02.314441 28 log.go:172] (0xc0000f6fd0) (0xc00069e000) Create stream\nI0826 22:45:02.314457 28 log.go:172] (0xc0000f6fd0) (0xc00069e000) Stream added, broadcasting: 3\nI0826 22:45:02.315601 28 log.go:172] (0xc0000f6fd0) Reply frame received for 3\nI0826 22:45:02.315645 28 log.go:172] (0xc0000f6fd0) (0xc000706000) Create stream\nI0826 22:45:02.315663 28 log.go:172] (0xc0000f6fd0) (0xc000706000) Stream added, broadcasting: 5\nI0826 22:45:02.316815 28 log.go:172] (0xc0000f6fd0) Reply frame received for 5\nI0826 22:45:02.387122 28 log.go:172] (0xc0000f6fd0) Data frame received for 5\nI0826 22:45:02.387147 28 log.go:172] (0xc000706000) (5) Data frame handling\nI0826 22:45:02.387160 28 log.go:172] (0xc000706000) (5) Data frame sent\n+ nslookup clusterip-service\nI0826 22:45:02.392934 28 log.go:172] (0xc0000f6fd0) Data frame received for 3\nI0826 22:45:02.392951 28 log.go:172] (0xc00069e000) (3) Data frame handling\nI0826 22:45:02.392959 28 log.go:172] (0xc00069e000) (3) Data frame sent\nI0826 22:45:02.393994 28 log.go:172] (0xc0000f6fd0) Data frame received for 3\nI0826 22:45:02.394010 28 log.go:172] (0xc00069e000) (3) Data frame handling\nI0826 22:45:02.394023 28 log.go:172] (0xc00069e000) (3) Data frame sent\nI0826 22:45:02.394511 28 log.go:172] (0xc0000f6fd0) Data frame received for 5\nI0826 22:45:02.394549 28 log.go:172] (0xc000706000) (5) Data frame handling\nI0826 22:45:02.394574 28 log.go:172] (0xc0000f6fd0) Data frame received for 3\nI0826 22:45:02.394596 28 log.go:172] (0xc00069e000) (3) Data frame handling\nI0826 22:45:02.396243 28 log.go:172] (0xc0000f6fd0) Data frame received for 1\nI0826 22:45:02.396262 28 log.go:172] (0xc0002d57c0) (1) Data frame handling\nI0826 22:45:02.396277 28 log.go:172] (0xc0002d57c0) (1) Data frame sent\nI0826 22:45:02.396286 28 log.go:172] (0xc0000f6fd0) (0xc0002d57c0) Stream removed, broadcasting: 1\nI0826 22:45:02.396295 28 log.go:172] (0xc0000f6fd0) Go away received\nI0826 22:45:02.396714 28 log.go:172] (0xc0000f6fd0) (0xc0002d57c0) Stream removed, broadcasting: 1\nI0826 22:45:02.396841 28 log.go:172] (0xc0000f6fd0) (0xc00069e000) Stream removed, broadcasting: 3\nI0826 22:45:02.396854 28 log.go:172] (0xc0000f6fd0) (0xc000706000) Stream removed, broadcasting: 5\n"
Aug 26 22:45:02.403: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-4358.svc.cluster.local\tcanonical name = externalsvc.services-4358.svc.cluster.local.\nName:\texternalsvc.services-4358.svc.cluster.local\nAddress: 10.98.164.211\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-4358, will wait for the garbage collector to delete the pods
Aug 26 22:45:02.474: INFO: Deleting ReplicationController externalsvc took: 17.260623ms
Aug 26 22:45:02.774: INFO: Terminating ReplicationController externalsvc pods took: 300.233967ms
Aug 26 22:45:08.342: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:45:08.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4358" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:23.522 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":1,"skipped":6,"failed":0}
SSSS
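For reference, the type flip the test above performs can be reproduced by hand with kubectl. This is a minimal sketch, not the framework's exact manifests: the service name, namespace, and ExternalName target come from the log, while the selector, ports, and the clusterIP-clearing patch are assumptions about how the conversion is expressed.

    # Start from a plain ClusterIP service (selector and ports are assumed).
    cat <<'EOF' | kubectl -n services-4358 apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: clusterip-service
    spec:
      type: ClusterIP
      selector:
        name: externalsvc
      ports:
      - port: 80
        targetPort: 8080
    EOF
    # Flip the type; the clusterIP field has to be dropped at the same time,
    # which is what the e2e framework does internally when it updates the service.
    kubectl -n services-4358 patch service clusterip-service --type merge -p \
      '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-4358.svc.cluster.local","clusterIP":null}}'
    # Verify the CNAME the same way the test's exec pod does.
    kubectl -n services-4358 run dns-probe --image=busybox:1.28 --restart=Never --rm -it -- \
      nslookup clusterip-service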
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:45:08.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-2209
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 26 22:45:08.429: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 26 22:45:38.643: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.130:8080/dial?request=hostname&protocol=udp&host=10.244.2.128&port=8081&tries=1'] Namespace:pod-network-test-2209 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 22:45:38.643: INFO: >>> kubeConfig: /root/.kube/config
I0826 22:45:38.677506 6 log.go:172] (0xc001ecb810) (0xc002d661e0) Create stream
I0826 22:45:38.677554 6 log.go:172] (0xc001ecb810) (0xc002d661e0) Stream added, broadcasting: 1
I0826 22:45:38.702882 6 log.go:172] (0xc001ecb810) Reply frame received for 1
I0826 22:45:38.702932 6 log.go:172] (0xc001ecb810) (0xc002e00500) Create stream
I0826 22:45:38.702947 6 log.go:172] (0xc001ecb810) (0xc002e00500) Stream added, broadcasting: 3
I0826 22:45:38.704102 6 log.go:172] (0xc001ecb810) Reply frame received for 3
I0826 22:45:38.704154 6 log.go:172] (0xc001ecb810) (0xc002976000) Create stream
I0826 22:45:38.704174 6 log.go:172] (0xc001ecb810) (0xc002976000) Stream added, broadcasting: 5
I0826 22:45:38.705242 6 log.go:172] (0xc001ecb810) Reply frame received for 5
I0826 22:45:38.773492 6 log.go:172] (0xc001ecb810) Data frame received for 3
I0826 22:45:38.773515 6 log.go:172] (0xc002e00500) (3) Data frame handling
I0826 22:45:38.773527 6 log.go:172] (0xc002e00500) (3) Data frame sent
I0826 22:45:38.773839 6 log.go:172] (0xc001ecb810) Data frame received for 3
I0826 22:45:38.773861 6 log.go:172] (0xc002e00500) (3) Data frame handling
I0826 22:45:38.773890 6 log.go:172] (0xc001ecb810) Data frame received for 5
I0826 22:45:38.773914 6 log.go:172] (0xc002976000) (5) Data frame handling
I0826 22:45:38.775190 6 log.go:172] (0xc001ecb810) Data frame received for 1
I0826 22:45:38.775221 6 log.go:172] (0xc002d661e0) (1) Data frame handling
I0826 22:45:38.775260 6 log.go:172] (0xc002d661e0) (1) Data frame sent
I0826 22:45:38.775289 6 log.go:172] (0xc001ecb810) (0xc002d661e0) Stream removed, broadcasting: 1
I0826 22:45:38.775384 6 log.go:172] (0xc001ecb810) Go away received
I0826 22:45:38.775630 6 log.go:172] (0xc001ecb810) (0xc002d661e0) Stream removed, broadcasting: 1
I0826 22:45:38.775646 6 log.go:172] (0xc001ecb810) (0xc002e00500) Stream removed, broadcasting: 3
I0826 22:45:38.775666 6 log.go:172] (0xc001ecb810) (0xc002976000) Stream removed, broadcasting: 5
Aug 26 22:45:38.775: INFO: Waiting for responses: map[]
Aug 26 22:45:38.778: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.130:8080/dial?request=hostname&protocol=udp&host=10.244.1.249&port=8081&tries=1'] Namespace:pod-network-test-2209 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 22:45:38.778: INFO: >>> kubeConfig: /root/.kube/config
I0826 22:45:38.814046 6 log.go:172] (0xc001ecbef0) (0xc002d66500) Create stream
I0826 22:45:38.814078 6 log.go:172] (0xc001ecbef0) (0xc002d66500) Stream added, broadcasting: 1
I0826 22:45:38.816960 6 log.go:172] (0xc001ecbef0) Reply frame received for 1
I0826 22:45:38.817005 6 log.go:172] (0xc001ecbef0) (0xc002d665a0) Create stream
I0826 22:45:38.817027 6 log.go:172] (0xc001ecbef0) (0xc002d665a0) Stream added, broadcasting: 3
I0826 22:45:38.818304 6 log.go:172] (0xc001ecbef0) Reply frame received for 3
I0826 22:45:38.818401 6 log.go:172] (0xc001ecbef0) (0xc0029760a0) Create stream
I0826 22:45:38.818431 6 log.go:172] (0xc001ecbef0) (0xc0029760a0) Stream added, broadcasting: 5
I0826 22:45:38.819511 6 log.go:172] (0xc001ecbef0) Reply frame received for 5
I0826 22:45:38.905644 6 log.go:172] (0xc001ecbef0) Data frame received for 3
I0826 22:45:38.905677 6 log.go:172] (0xc002d665a0) (3) Data frame handling
I0826 22:45:38.905700 6 log.go:172] (0xc002d665a0) (3) Data frame sent
I0826 22:45:38.906391 6 log.go:172] (0xc001ecbef0) Data frame received for 5
I0826 22:45:38.906434 6 log.go:172] (0xc0029760a0) (5) Data frame handling
I0826 22:45:38.906473 6 log.go:172] (0xc001ecbef0) Data frame received for 3
I0826 22:45:38.906497 6 log.go:172] (0xc002d665a0) (3) Data frame handling
I0826 22:45:38.908203 6 log.go:172] (0xc001ecbef0) Data frame received for 1
I0826 22:45:38.908223 6 log.go:172] (0xc002d66500) (1) Data frame handling
I0826 22:45:38.908234 6 log.go:172] (0xc002d66500) (1) Data frame sent
I0826 22:45:38.908251 6 log.go:172] (0xc001ecbef0) (0xc002d66500) Stream removed, broadcasting: 1
I0826 22:45:38.908330 6 log.go:172] (0xc001ecbef0) Go away received
I0826 22:45:38.908386 6 log.go:172] (0xc001ecbef0) (0xc002d66500) Stream removed, broadcasting: 1
I0826 22:45:38.908408 6 log.go:172] (0xc001ecbef0) (0xc002d665a0) Stream removed, broadcasting: 3
I0826 22:45:38.908423 6 log.go:172] (0xc001ecbef0) (0xc0029760a0) Stream removed, broadcasting: 5
Aug 26 22:45:38.908: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:45:38.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2209" for this suite.
• [SLOW TEST:30.525 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":10,"failed":0}
SSSSSSS
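The UDP check above drives agnhost's netexec /dial endpoint: an HTTP request to one pod asks it to send a UDP probe to another pod and report what came back. The probe can be run by hand with the same command the framework executed (pod name, namespace, and IPs are taken from the log; only the standalone invocation is an assumption):

    # Ask the netexec server in the host-test pod (10.244.2.130:8080) to dial a
    # peer pod (10.244.2.128) over UDP and return the peer's hostname.
    kubectl -n pod-network-test-2209 exec host-test-container-pod -- \
      curl -g -q -s 'http://10.244.2.130:8080/dial?request=hostname&protocol=udp&host=10.244.2.128&port=8081&tries=1'
    # A non-empty {"responses":[...]} body means the UDP path works; the
    # framework's "Waiting for responses: map[]" line means nothing is still
    # outstanding.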
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":3,"skipped":17,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 22:45:42.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Aug 26 22:45:49.541: INFO: Pod pod-hostip-6b447842-047e-4596-9176-ea982269d352 has hostIP: 172.18.0.3 [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 22:45:49.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9159" for this suite. • [SLOW TEST:7.199 seconds] [k8s.io] Pods /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":28,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 22:45:49.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:45:49.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:46:01.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5411" for this suite.
• [SLOW TEST:11.948 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":5,"skipped":51,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:46:01.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-7490bd74-4f54-419b-81f5-40aa4aa48527
STEP: Creating a pod to test consume configMaps
Aug 26 22:46:01.656: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-33585a80-ff36-411c-84ee-1acf22ca3f5f" in namespace "projected-8592" to be "success or failure"
Aug 26 22:46:01.663: INFO: Pod "pod-projected-configmaps-33585a80-ff36-411c-84ee-1acf22ca3f5f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131394ms
Aug 26 22:46:03.667: INFO: Pod "pod-projected-configmaps-33585a80-ff36-411c-84ee-1acf22ca3f5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010075283s
Aug 26 22:46:05.673: INFO: Pod "pod-projected-configmaps-33585a80-ff36-411c-84ee-1acf22ca3f5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016265787s
Aug 26 22:46:07.676: INFO: Pod "pod-projected-configmaps-33585a80-ff36-411c-84ee-1acf22ca3f5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019543616s
STEP: Saw pod success
Aug 26 22:46:07.676: INFO: Pod "pod-projected-configmaps-33585a80-ff36-411c-84ee-1acf22ca3f5f" satisfied condition "success or failure"
Aug 26 22:46:07.679: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-33585a80-ff36-411c-84ee-1acf22ca3f5f container projected-configmap-volume-test:
STEP: delete the pod
Aug 26 22:46:07.879: INFO: Waiting for pod pod-projected-configmaps-33585a80-ff36-411c-84ee-1acf22ca3f5f to disappear
Aug 26 22:46:08.051: INFO: Pod pod-projected-configmaps-33585a80-ff36-411c-84ee-1acf22ca3f5f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:46:08.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8592" for this suite.
• [SLOW TEST:6.707 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":58,"failed":0}
SS
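The projected-configMap test above mounts a ConfigMap key through a projected volume under a remapped path and reads it back from a container running as a non-root user. A minimal sketch of that pod shape; the ConfigMap name follows the log's pattern, but the key, target path, UID, and image are assumptions:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: projected-configmap-test-volume-map
    data:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-configmaps
    spec:
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
        securityContext:
          runAsUser: 1000            # the "as non-root" part of the test
        volumeMounts:
        - name: projected-configmap-volume
          mountPath: /etc/projected-configmap-volume
      volumes:
      - name: projected-configmap-volume
        projected:
          sources:
          - configMap:
              name: projected-configmap-test-volume-map
              items:
              - key: data-1
                path: path/to/data-2   # the "with mappings" part
    EOF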
------------------------------
[sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:46:08.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1629
[It] should create a deployment from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 26 22:46:08.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-9548'
Aug 26 22:46:08.731: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 26 22:46:08.731: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634
Aug 26 22:46:10.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-9548'
Aug 26 22:46:11.856: INFO: stderr: ""
Aug 26 22:46:11.856: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:46:11.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9548" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Deprecated] [Conformance]","total":278,"completed":7,"skipped":60,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:46:12.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 22:46:14.333: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 22:46:16.531: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734078774, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734078774, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734078774, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734078774, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 22:46:18.602: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734078774, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734078774, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734078774, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734078774, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 22:46:21.806: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:46:21.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1577" for this suite.
STEP: Destroying namespace "webhook-1577-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.682 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":8,"skipped":72,"failed":0}
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:46:21.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:182
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:46:22.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4494" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":9,"skipped":75,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
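The QOS-class test above creates a pod whose container has identical requests and limits for cpu and memory, which the API server classifies as Guaranteed and records in the pod's status. A sketch of that shape; the pod name, image, and resource figures are assumptions:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: qos-guaranteed
    spec:
      containers:
      - name: ctr
        image: k8s.gcr.io/pause:3.1
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 100m       # limits == requests for every resource
            memory: 100Mi   # => status.qosClass: Guaranteed
    EOF
    kubectl get pod qos-guaranteed -o jsonpath='{.status.qosClass}'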
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:46:22.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:46:38.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9129" for this suite.
• [SLOW TEST:16.875 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":10,"skipped":112,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:46:38.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:46:46.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7077" for this suite.
• [SLOW TEST:7.655 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":125,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:46:46.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1760
[It] should create a pod from an image when restart is Never [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 26 22:46:47.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6767'
Aug 26 22:46:47.830: INFO: stderr: ""
Aug 26 22:46:47.830: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1765
Aug 26 22:46:48.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6767'
Aug 26 22:47:01.864: INFO: stderr: ""
Aug 26 22:47:01.864: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:47:01.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6767" for this suite.
• [SLOW TEST:15.580 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1756
    should create a pod from an image when restart is Never [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":12,"skipped":130,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:47:02.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 22:47:02.719: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:47:03.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7307" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":13,"skipped":133,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:47:04.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-3961c39c-2ce5-4b87-be28-73475bf21911
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:47:04.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8107" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":14,"skipped":144,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:47:04.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:47:21.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5907" for this suite.
• [SLOW TEST:16.484 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":15,"skipped":179,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:47:21.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Aug 26 22:47:21.126: INFO: Waiting up to 5m0s for pod "client-containers-4df2a17c-1e56-42e5-b83e-ab4d068cfdf3" in namespace "containers-7151" to be "success or failure"
Aug 26 22:47:21.151: INFO: Pod "client-containers-4df2a17c-1e56-42e5-b83e-ab4d068cfdf3": Phase="Pending", Reason="", readiness=false. Elapsed: 24.428464ms
Aug 26 22:47:23.154: INFO: Pod "client-containers-4df2a17c-1e56-42e5-b83e-ab4d068cfdf3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027147971s
Aug 26 22:47:25.157: INFO: Pod "client-containers-4df2a17c-1e56-42e5-b83e-ab4d068cfdf3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030702203s
STEP: Saw pod success
Aug 26 22:47:25.157: INFO: Pod "client-containers-4df2a17c-1e56-42e5-b83e-ab4d068cfdf3" satisfied condition "success or failure"
Aug 26 22:47:25.173: INFO: Trying to get logs from node jerma-worker pod client-containers-4df2a17c-1e56-42e5-b83e-ab4d068cfdf3 container test-container:
STEP: delete the pod
Aug 26 22:47:25.232: INFO: Waiting for pod client-containers-4df2a17c-1e56-42e5-b83e-ab4d068cfdf3 to disappear
Aug 26 22:47:25.256: INFO: Pod client-containers-4df2a17c-1e56-42e5-b83e-ab4d068cfdf3 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:47:25.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7151" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":190,"failed":0}
SSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:47:25.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 22:47:25.451: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-6fe81184-8a43-4f54-949e-d65695be9fc1" in namespace "security-context-test-8717" to be "success or failure"
Aug 26 22:47:25.454: INFO: Pod "busybox-privileged-false-6fe81184-8a43-4f54-949e-d65695be9fc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.818147ms
Aug 26 22:47:27.458: INFO: Pod "busybox-privileged-false-6fe81184-8a43-4f54-949e-d65695be9fc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006338334s
Aug 26 22:47:29.482: INFO: Pod "busybox-privileged-false-6fe81184-8a43-4f54-949e-d65695be9fc1": Phase="Running", Reason="", readiness=true. Elapsed: 4.031236304s
Aug 26 22:47:31.486: INFO: Pod "busybox-privileged-false-6fe81184-8a43-4f54-949e-d65695be9fc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035170522s
Aug 26 22:47:31.486: INFO: Pod "busybox-privileged-false-6fe81184-8a43-4f54-949e-d65695be9fc1" satisfied condition "success or failure"
Aug 26 22:47:31.493: INFO: Got logs for pod "busybox-privileged-false-6fe81184-8a43-4f54-949e-d65695be9fc1": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:47:31.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8717" for this suite.
• [SLOW TEST:6.238 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with privileged
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":193,"failed":0}
SSS
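The security-context test above confirms that with privileged: false the container cannot perform host-level operations; the captured log line "ip: RTNETLINK answers: Operation not permitted" is the kernel refusing a network-interface change. An approximate equivalent, with the image tag and exact ip invocation as assumptions:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-privileged-false
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: busybox:1.29
        command: ["sh", "-c", "ip link add dummy0 type dummy || true"]
        securityContext:
          privileged: false   # the default, made explicit: no host privileges
    EOF
    # Expect the RTNETLINK "Operation not permitted" refusal in the logs.
    kubectl logs busybox-privileged-false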
• [SLOW TEST:6.782 seconds] [sig-storage] Secrets /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":196,"failed":0} [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 22:47:38.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 22:47:38.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9386" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":19,"skipped":196,"failed":0} SSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 22:47:38.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-9701 STEP: Waiting for pods to come up. 
STEP: Creating tester pod tester in namespace prestop-9701 STEP: Deleting pre-stop pod Aug 26 22:47:51.882: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 22:47:51.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-9701" for this suite. • [SLOW TEST:13.491 seconds] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":20,"skipped":200,"failed":0} [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 22:47:51.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 22:47:57.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1774" for this suite. 
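The EmptyDir wrapper test above mounts a secret and a configmap in one pod; a hedged sketch of that shape (names are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-configmaps"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Secret and ConfigMap volumes are "wrapped" volumes: the kubelet backs
			// each with a hidden emptyDir and atomically projects the data into it.
			// Mounting one of each in a single pod checks the wrappers don't conflict.
			Volumes: []corev1.Volume{
				{Name: "secret-volume", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "wrapped-secret"}}},
				{Name: "configmap-volume", VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "wrapped-configmap"}}}},
			},
			Containers: []corev1.Container{{
				Name:    "wrapper-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume", MountPath: "/etc/secret-volume"},
					{Name: "configmap-volume", MountPath: "/etc/configmap-volume"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}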
• [SLOW TEST:5.123 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":21,"skipped":200,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 22:47:57.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-v5sc STEP: Creating a pod to test atomic-volume-subpath Aug 26 22:47:57.640: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-v5sc" in namespace "subpath-4296" to be "success or failure" Aug 26 22:47:57.643: INFO: Pod "pod-subpath-test-configmap-v5sc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.834577ms Aug 26 22:47:59.650: INFO: Pod "pod-subpath-test-configmap-v5sc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010854412s Aug 26 22:48:01.668: INFO: Pod "pod-subpath-test-configmap-v5sc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02884612s Aug 26 22:48:03.677: INFO: Pod "pod-subpath-test-configmap-v5sc": Phase="Running", Reason="", readiness=true. Elapsed: 6.037757767s Aug 26 22:48:05.681: INFO: Pod "pod-subpath-test-configmap-v5sc": Phase="Running", Reason="", readiness=true. Elapsed: 8.041381315s Aug 26 22:48:07.686: INFO: Pod "pod-subpath-test-configmap-v5sc": Phase="Running", Reason="", readiness=true. Elapsed: 10.046862882s Aug 26 22:48:09.762: INFO: Pod "pod-subpath-test-configmap-v5sc": Phase="Running", Reason="", readiness=true. Elapsed: 12.12197538s Aug 26 22:48:11.766: INFO: Pod "pod-subpath-test-configmap-v5sc": Phase="Running", Reason="", readiness=true. Elapsed: 14.126012008s Aug 26 22:48:13.794: INFO: Pod "pod-subpath-test-configmap-v5sc": Phase="Running", Reason="", readiness=true. Elapsed: 16.154717796s Aug 26 22:48:15.923: INFO: Pod "pod-subpath-test-configmap-v5sc": Phase="Running", Reason="", readiness=true. Elapsed: 18.283665378s Aug 26 22:48:17.928: INFO: Pod "pod-subpath-test-configmap-v5sc": Phase="Running", Reason="", readiness=true. Elapsed: 20.287953813s Aug 26 22:48:19.932: INFO: Pod "pod-subpath-test-configmap-v5sc": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.291984217s Aug 26 22:48:21.945: INFO: Pod "pod-subpath-test-configmap-v5sc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.304914614s STEP: Saw pod success Aug 26 22:48:21.945: INFO: Pod "pod-subpath-test-configmap-v5sc" satisfied condition "success or failure" Aug 26 22:48:21.947: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-v5sc container test-container-subpath-configmap-v5sc: STEP: delete the pod Aug 26 22:48:21.983: INFO: Waiting for pod pod-subpath-test-configmap-v5sc to disappear Aug 26 22:48:21.997: INFO: Pod pod-subpath-test-configmap-v5sc no longer exists STEP: Deleting pod pod-subpath-test-configmap-v5sc Aug 26 22:48:21.997: INFO: Deleting pod "pod-subpath-test-configmap-v5sc" in namespace "subpath-4296" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 22:48:22.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4296" for this suite. • [SLOW TEST:24.979 seconds] [sig-storage] Subpath /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":22,"skipped":214,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 22:48:22.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 26 22:48:22.935: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 26 22:48:25.025: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734078903, 
loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734078903, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734078903, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734078902, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 26 22:48:28.055: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 22:48:28.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5" for this suite. STEP: Destroying namespace "webhook-5-markers" for this suite. 
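The webhook test above registers, updates, and patches a MutatingWebhookConfiguration; a hedged sketch of such an object using k8s.io/api/admissionregistration/v1 (webhook name, service path, and CA bundle are assumptions, not the framework's values):

package main

import (
	"encoding/json"
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/mutating-configmaps" // illustrative path
	var caBundle []byte            // the webhook server's CA cert (assumption)
	cfg := admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-mutating-webhook"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "adding-configmap-data.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service:  &admissionregistrationv1.ServiceReference{Namespace: "webhook-5", Name: "e2e-test-webhook", Path: &path},
				CABundle: caBundle,
			},
			// The test first updates this list to exclude CREATE (so a new
			// ConfigMap is not mutated), then patches CREATE back in and
			// verifies the next ConfigMap is mutated.
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}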
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.316 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":23,"skipped":241,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 22:48:28.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Aug 26 22:48:28.411: INFO: Waiting up to 5m0s for pod "var-expansion-225f4df3-730d-4c41-8349-e4f480df5a24" in namespace "var-expansion-6884" to be "success or failure" Aug 26 22:48:28.445: INFO: Pod "var-expansion-225f4df3-730d-4c41-8349-e4f480df5a24": Phase="Pending", Reason="", readiness=false. Elapsed: 34.422609ms Aug 26 22:48:30.477: INFO: Pod "var-expansion-225f4df3-730d-4c41-8349-e4f480df5a24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066508084s Aug 26 22:48:32.480: INFO: Pod "var-expansion-225f4df3-730d-4c41-8349-e4f480df5a24": Phase="Running", Reason="", readiness=true. Elapsed: 4.069505957s Aug 26 22:48:34.484: INFO: Pod "var-expansion-225f4df3-730d-4c41-8349-e4f480df5a24": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.073358823s STEP: Saw pod success Aug 26 22:48:34.484: INFO: Pod "var-expansion-225f4df3-730d-4c41-8349-e4f480df5a24" satisfied condition "success or failure" Aug 26 22:48:34.487: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-225f4df3-730d-4c41-8349-e4f480df5a24 container dapi-container: STEP: delete the pod Aug 26 22:48:34.523: INFO: Waiting for pod var-expansion-225f4df3-730d-4c41-8349-e4f480df5a24 to disappear Aug 26 22:48:34.528: INFO: Pod var-expansion-225f4df3-730d-4c41-8349-e4f480df5a24 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 22:48:34.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6884" for this suite. • [SLOW TEST:6.206 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":265,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 22:48:34.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 22:48:50.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5743" for this suite. • [SLOW TEST:16.326 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":25,"skipped":285,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 22:48:50.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-c3d19b46-81b0-4010-b1ad-009635a8fc50 STEP: Creating a pod to test consume configMaps Aug 26 22:48:50.982: INFO: Waiting up to 5m0s for pod "pod-configmaps-80b397ea-7e35-413e-9399-85f208c0605f" in namespace "configmap-596" to be "success or failure" Aug 26 22:48:51.014: INFO: Pod "pod-configmaps-80b397ea-7e35-413e-9399-85f208c0605f": Phase="Pending", Reason="", readiness=false. Elapsed: 31.499006ms Aug 26 22:48:53.089: INFO: Pod "pod-configmaps-80b397ea-7e35-413e-9399-85f208c0605f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.106631775s Aug 26 22:48:55.093: INFO: Pod "pod-configmaps-80b397ea-7e35-413e-9399-85f208c0605f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110459136s Aug 26 22:48:57.098: INFO: Pod "pod-configmaps-80b397ea-7e35-413e-9399-85f208c0605f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.115192533s STEP: Saw pod success Aug 26 22:48:57.098: INFO: Pod "pod-configmaps-80b397ea-7e35-413e-9399-85f208c0605f" satisfied condition "success or failure" Aug 26 22:48:57.100: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-80b397ea-7e35-413e-9399-85f208c0605f container configmap-volume-test: STEP: delete the pod Aug 26 22:48:57.240: INFO: Waiting for pod pod-configmaps-80b397ea-7e35-413e-9399-85f208c0605f to disappear Aug 26 22:48:57.508: INFO: Pod pod-configmaps-80b397ea-7e35-413e-9399-85f208c0605f no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 22:48:57.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-596" for this suite. • [SLOW TEST:6.676 seconds] [sig-storage] ConfigMap /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":294,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 22:48:57.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Aug 26 22:48:57.673: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 26 22:48:57.684: INFO: Waiting for terminating namespaces to be deleted... 
Aug 26 22:48:57.685: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Aug 26 22:48:57.689: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded) Aug 26 22:48:57.689: INFO: Container app ready: true, restart count 0 Aug 26 22:48:57.689: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 26 22:48:57.689: INFO: Container kube-proxy ready: true, restart count 0 Aug 26 22:48:57.689: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 26 22:48:57.689: INFO: Container kindnet-cni ready: true, restart count 0 Aug 26 22:48:57.689: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Aug 26 22:48:57.693: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 26 22:48:57.693: INFO: Container kube-proxy ready: true, restart count 0 Aug 26 22:48:57.693: INFO: test-recreate-deployment-5f94c574ff-k4dkm from deployment-5601 started at 2020-08-23 04:50:56 +0000 UTC (1 container statuses recorded) Aug 26 22:48:57.693: INFO: Container httpd ready: true, restart count 0 Aug 26 22:48:57.693: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded) Aug 26 22:48:57.693: INFO: Container kindnet-cni ready: true, restart count 0 Aug 26 22:48:57.693: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded) Aug 26 22:48:57.693: INFO: Container app ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Aug 26 22:48:57.885: INFO: Pod daemon-set-4l8wc requesting resource cpu=0m on Node jerma-worker Aug 26 22:48:57.885: INFO: Pod daemon-set-cxv46 requesting resource cpu=0m on Node jerma-worker2 Aug 26 22:48:57.885: INFO: Pod test-recreate-deployment-5f94c574ff-k4dkm requesting resource cpu=0m on Node jerma-worker2 Aug 26 22:48:57.885: INFO: Pod kindnet-gxck9 requesting resource cpu=100m on Node jerma-worker2 Aug 26 22:48:57.885: INFO: Pod kindnet-tfrcx requesting resource cpu=100m on Node jerma-worker Aug 26 22:48:57.885: INFO: Pod kube-proxy-ckhpn requesting resource cpu=0m on Node jerma-worker2 Aug 26 22:48:57.885: INFO: Pod kube-proxy-lgd85 requesting resource cpu=0m on Node jerma-worker STEP: Starting Pods to consume most of the cluster CPU. Aug 26 22:48:57.885: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Aug 26 22:48:57.910: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-6760dd7a-bb99-482b-847f-9a57a368e448.162ef3090e90fb8c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2610/filler-pod-6760dd7a-bb99-482b-847f-9a57a368e448 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-6760dd7a-bb99-482b-847f-9a57a368e448.162ef3099315e10e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-6760dd7a-bb99-482b-847f-9a57a368e448.162ef30a06234183], Reason = [Created], Message = [Created container filler-pod-6760dd7a-bb99-482b-847f-9a57a368e448] STEP: Considering event: Type = [Normal], Name = [filler-pod-6760dd7a-bb99-482b-847f-9a57a368e448.162ef30a17b5b462], Reason = [Started], Message = [Started container filler-pod-6760dd7a-bb99-482b-847f-9a57a368e448] STEP: Considering event: Type = [Normal], Name = [filler-pod-e2913dac-0616-473d-b4d7-8c46fe4a2fcb.162ef3090a75d1d4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2610/filler-pod-e2913dac-0616-473d-b4d7-8c46fe4a2fcb to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-e2913dac-0616-473d-b4d7-8c46fe4a2fcb.162ef3095657eb9d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e2913dac-0616-473d-b4d7-8c46fe4a2fcb.162ef309cce792d8], Reason = [Created], Message = [Created container filler-pod-e2913dac-0616-473d-b4d7-8c46fe4a2fcb] STEP: Considering event: Type = [Normal], Name = [filler-pod-e2913dac-0616-473d-b4d7-8c46fe4a2fcb.162ef309eb503457], Reason = [Started], Message = [Started container filler-pod-e2913dac-0616-473d-b4d7-8c46fe4a2fcb] STEP: Considering event: Type = [Warning], Name = [additional-pod.162ef30a74fd916b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.162ef30a7afa9c2a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Aug 26 22:49:05.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2610" for this suite. 
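The "Insufficient cpu" events above come from CPU requests: the scheduler sums per-node requests and rejects any pod whose request no longer fits. A hedged sketch of a filler pod like the ones created above (the 11130m figure is taken from the log; the pod name is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "filler",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					// Requests (not actual usage) are what the scheduler accounts;
					// one more pod with any CPU request then fails to schedule.
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("11130m")},
				},
			}},
		},
	}
	fmt.Println("cpu request:", pod.Spec.Containers[0].Resources.Requests.Cpu())
}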
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:7.990 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":27,"skipped":306,"failed":0} SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Aug 26 22:49:05.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Aug 26 22:49:06.793: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
alternatives.log
containers/
[... the same two-entry listing repeated for all 20 proxy requests; the remainder of this test's output, its PASSED record, and the opening lines of the next test's log are truncated in the source ...]
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 22:49:08.473: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 22:49:10.482: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734078948, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734078948, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734078948, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734078948, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 22:49:13.515: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 22:49:13.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8719-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:49:14.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3725" for this suite.
STEP: Destroying namespace "webhook-3725-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.904 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":29,"skipped":315,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:49:14.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-6938/configmap-test-7b1145e0-af23-4f84-b7d8-ecc11cc82526
STEP: Creating a pod to test consume configMaps
Aug 26 22:49:15.011: INFO: Waiting up to 5m0s for pod "pod-configmaps-73756c78-93ea-416a-ad6d-2d8b23c65728" in namespace "configmap-6938" to be "success or failure"
Aug 26 22:49:15.047: INFO: Pod "pod-configmaps-73756c78-93ea-416a-ad6d-2d8b23c65728": Phase="Pending", Reason="", readiness=false. Elapsed: 35.96524ms
Aug 26 22:49:17.071: INFO: Pod "pod-configmaps-73756c78-93ea-416a-ad6d-2d8b23c65728": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059051255s
Aug 26 22:49:19.089: INFO: Pod "pod-configmaps-73756c78-93ea-416a-ad6d-2d8b23c65728": Phase="Running", Reason="", readiness=true. Elapsed: 4.077515342s
Aug 26 22:49:21.093: INFO: Pod "pod-configmaps-73756c78-93ea-416a-ad6d-2d8b23c65728": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.081376822s
STEP: Saw pod success
Aug 26 22:49:21.093: INFO: Pod "pod-configmaps-73756c78-93ea-416a-ad6d-2d8b23c65728" satisfied condition "success or failure"
Aug 26 22:49:21.095: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-73756c78-93ea-416a-ad6d-2d8b23c65728 container env-test: 
STEP: delete the pod
Aug 26 22:49:21.111: INFO: Waiting for pod pod-configmaps-73756c78-93ea-416a-ad6d-2d8b23c65728 to disappear
Aug 26 22:49:21.116: INFO: Pod pod-configmaps-73756c78-93ea-416a-ad6d-2d8b23c65728 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:49:21.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6938" for this suite.

• [SLOW TEST:6.217 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":325,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:49:21.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 22:49:21.230: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7be43d4e-d786-4306-8c00-1b60177e4b0b" in namespace "projected-8213" to be "success or failure"
Aug 26 22:49:21.237: INFO: Pod "downwardapi-volume-7be43d4e-d786-4306-8c00-1b60177e4b0b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.360935ms
Aug 26 22:49:23.241: INFO: Pod "downwardapi-volume-7be43d4e-d786-4306-8c00-1b60177e4b0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010624793s
Aug 26 22:49:25.246: INFO: Pod "downwardapi-volume-7be43d4e-d786-4306-8c00-1b60177e4b0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016472909s
STEP: Saw pod success
Aug 26 22:49:25.246: INFO: Pod "downwardapi-volume-7be43d4e-d786-4306-8c00-1b60177e4b0b" satisfied condition "success or failure"
Aug 26 22:49:25.252: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-7be43d4e-d786-4306-8c00-1b60177e4b0b container client-container: 
STEP: delete the pod
Aug 26 22:49:25.384: INFO: Waiting for pod downwardapi-volume-7be43d4e-d786-4306-8c00-1b60177e4b0b to disappear
Aug 26 22:49:25.416: INFO: Pod downwardapi-volume-7be43d4e-d786-4306-8c00-1b60177e4b0b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:49:25.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8213" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":341,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:49:25.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-4504aca2-682c-4cee-a144-18d6cc412f82
STEP: Creating secret with name s-test-opt-upd-cf821663-a2d0-4699-9807-30462d7d45db
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-4504aca2-682c-4cee-a144-18d6cc412f82
STEP: Updating secret s-test-opt-upd-cf821663-a2d0-4699-9807-30462d7d45db
STEP: Creating secret with name s-test-opt-create-9bc52278-d5fc-41fe-9584-a31b66daa8b6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:50:52.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8717" for this suite.

• [SLOW TEST:87.485 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":348,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:50:52.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-e1be35fe-b71c-468f-ba6d-6c0bbb94e7e1
STEP: Creating a pod to test consume configMaps
Aug 26 22:50:53.326: INFO: Waiting up to 5m0s for pod "pod-configmaps-d863a5e8-9137-4e1f-99ba-d3923fcf5722" in namespace "configmap-4207" to be "success or failure"
Aug 26 22:50:53.395: INFO: Pod "pod-configmaps-d863a5e8-9137-4e1f-99ba-d3923fcf5722": Phase="Pending", Reason="", readiness=false. Elapsed: 69.379144ms
Aug 26 22:50:55.398: INFO: Pod "pod-configmaps-d863a5e8-9137-4e1f-99ba-d3923fcf5722": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072127006s
Aug 26 22:50:57.401: INFO: Pod "pod-configmaps-d863a5e8-9137-4e1f-99ba-d3923fcf5722": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075835928s
STEP: Saw pod success
Aug 26 22:50:57.401: INFO: Pod "pod-configmaps-d863a5e8-9137-4e1f-99ba-d3923fcf5722" satisfied condition "success or failure"
Aug 26 22:50:57.404: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-d863a5e8-9137-4e1f-99ba-d3923fcf5722 container configmap-volume-test: 
STEP: delete the pod
Aug 26 22:50:57.435: INFO: Waiting for pod pod-configmaps-d863a5e8-9137-4e1f-99ba-d3923fcf5722 to disappear
Aug 26 22:50:57.445: INFO: Pod pod-configmaps-d863a5e8-9137-4e1f-99ba-d3923fcf5722 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:50:57.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4207" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":406,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:50:57.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:51:09.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5974" for this suite.

• [SLOW TEST:11.863 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":34,"skipped":419,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:51:09.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1736.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1736.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 22:51:15.746: INFO: DNS probes using dns-1736/dns-test-d7751239-c57f-4faa-b4d8-64a0ccbc1a5c succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:51:15.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1736" for this suite.

• [SLOW TEST:6.257 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":35,"skipped":442,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:51:15.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 22:51:16.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows requests with any unknown properties
Aug 26 22:51:19.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1849 create -f -'
Aug 26 22:51:22.849: INFO: stderr: ""
Aug 26 22:51:22.849: INFO: stdout: "e2e-test-crd-publish-openapi-2115-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Aug 26 22:51:22.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1849 delete e2e-test-crd-publish-openapi-2115-crds test-cr'
Aug 26 22:51:22.989: INFO: stderr: ""
Aug 26 22:51:22.989: INFO: stdout: "e2e-test-crd-publish-openapi-2115-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Aug 26 22:51:22.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1849 apply -f -'
Aug 26 22:51:23.257: INFO: stderr: ""
Aug 26 22:51:23.257: INFO: stdout: "e2e-test-crd-publish-openapi-2115-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Aug 26 22:51:23.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1849 delete e2e-test-crd-publish-openapi-2115-crds test-cr'
Aug 26 22:51:23.372: INFO: stderr: ""
Aug 26 22:51:23.372: INFO: stdout: "e2e-test-crd-publish-openapi-2115-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Aug 26 22:51:23.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2115-crds'
Aug 26 22:51:23.591: INFO: stderr: ""
Aug 26 22:51:23.591: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2115-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:51:25.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1849" for this suite.

• [SLOW TEST:9.588 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":36,"skipped":448,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:51:25.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Aug 26 22:51:25.587: INFO: Waiting up to 5m0s for pod "pod-fc1cb4fa-f626-4058-b0f5-3b95df645543" in namespace "emptydir-3773" to be "success or failure"
Aug 26 22:51:25.599: INFO: Pod "pod-fc1cb4fa-f626-4058-b0f5-3b95df645543": Phase="Pending", Reason="", readiness=false. Elapsed: 11.796637ms
Aug 26 22:51:27.623: INFO: Pod "pod-fc1cb4fa-f626-4058-b0f5-3b95df645543": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03574998s
Aug 26 22:51:29.626: INFO: Pod "pod-fc1cb4fa-f626-4058-b0f5-3b95df645543": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038696411s
STEP: Saw pod success
Aug 26 22:51:29.626: INFO: Pod "pod-fc1cb4fa-f626-4058-b0f5-3b95df645543" satisfied condition "success or failure"
Aug 26 22:51:29.628: INFO: Trying to get logs from node jerma-worker2 pod pod-fc1cb4fa-f626-4058-b0f5-3b95df645543 container test-container: 
STEP: delete the pod
Aug 26 22:51:29.701: INFO: Waiting for pod pod-fc1cb4fa-f626-4058-b0f5-3b95df645543 to disappear
Aug 26 22:51:29.784: INFO: Pod pod-fc1cb4fa-f626-4058-b0f5-3b95df645543 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:51:29.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3773" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":454,"failed":0}

------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:51:29.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Aug 26 22:51:29.952: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
Aug 26 22:51:30.715: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Aug 26 22:51:33.143: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734079090, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734079090, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734079090, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734079090, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 22:51:35.146: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734079090, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734079090, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734079090, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734079090, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 22:51:37.678: INFO: Waited 523.750082ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:51:38.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-1493" for this suite.

• [SLOW TEST:8.455 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":38,"skipped":454,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:51:38.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 22:51:38.683: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-ad121170-5533-4a48-aa56-5bcdc27bd693" in namespace "security-context-test-8298" to be "success or failure"
Aug 26 22:51:38.710: INFO: Pod "busybox-readonly-false-ad121170-5533-4a48-aa56-5bcdc27bd693": Phase="Pending", Reason="", readiness=false. Elapsed: 27.49747ms
Aug 26 22:51:40.761: INFO: Pod "busybox-readonly-false-ad121170-5533-4a48-aa56-5bcdc27bd693": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077631957s
Aug 26 22:51:42.900: INFO: Pod "busybox-readonly-false-ad121170-5533-4a48-aa56-5bcdc27bd693": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.216610727s
Aug 26 22:51:42.900: INFO: Pod "busybox-readonly-false-ad121170-5533-4a48-aa56-5bcdc27bd693" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:51:42.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8298" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":462,"failed":0}
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:51:42.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:51:47.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7739" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":467,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:51:47.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-e9965e63-361a-484e-a5bd-188708379cd9
STEP: Creating configMap with name cm-test-opt-upd-3667b727-63d4-4d6f-80b5-91455ca1e4db
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-e9965e63-361a-484e-a5bd-188708379cd9
STEP: Updating configmap cm-test-opt-upd-3667b727-63d4-4d6f-80b5-91455ca1e4db
STEP: Creating configMap with name cm-test-opt-create-c680cb07-0464-482f-bf65-adca65c16e77
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:53:22.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6972" for this suite.

• [SLOW TEST:94.989 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":472,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:53:22.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Aug 26 22:53:26.878: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2677 pod-service-account-a27f7165-ccaf-401b-b0fb-c60179373b84 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Aug 26 22:53:27.113: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2677 pod-service-account-a27f7165-ccaf-401b-b0fb-c60179373b84 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Aug 26 22:53:27.360: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2677 pod-service-account-a27f7165-ccaf-401b-b0fb-c60179373b84 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:53:27.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2677" for this suite.

• [SLOW TEST:5.520 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":278,"completed":42,"skipped":533,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:53:27.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Aug 26 22:53:27.950: INFO: >>> kubeConfig: /root/.kube/config
Aug 26 22:53:30.482: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:53:41.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7039" for this suite.

• [SLOW TEST:13.471 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":43,"skipped":544,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check if all data is printed  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:53:41.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if all data is printed  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 22:53:41.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Aug 26 22:53:41.261: INFO: stderr: ""
Aug 26 22:53:41.261: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.11\", GitCommit:\"ea5f00d93211b7c80247bf607cfa422ad6fb5347\", GitTreeState:\"clean\", BuildDate:\"2020-08-13T15:20:25Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.5\", GitCommit:\"e0fccafd69541e3750d460ba0f9743b90336f24f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:11:15Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:53:41.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6240" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":44,"skipped":559,"failed":0}
SSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partially qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:53:41.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partially qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6967 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6967;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6967 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6967;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6967.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6967.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6967.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6967.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6967.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6967.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6967.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6967.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6967.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6967.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6967.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6967.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6967.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 51.53.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.53.51_udp@PTR;check="$$(dig +tcp +noall +answer +search 51.53.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.53.51_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6967 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6967;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6967 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6967;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6967.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6967.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6967.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6967.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6967.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6967.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6967.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6967.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6967.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6967.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6967.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6967.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6967.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 51.53.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.53.51_udp@PTR;check="$$(dig +tcp +noall +answer +search 51.53.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.53.51_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 22:53:49.647: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6967/dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef: the server could not find the requested resource (get pods dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef)
Aug 26 22:53:49.650: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6967/dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef: the server could not find the requested resource (get pods dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef)
Aug 26 22:53:49.652: INFO: Unable to read wheezy_udp@dns-test-service.dns-6967 from pod dns-6967/dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef: the server could not find the requested resource (get pods dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef)
Aug 26 22:53:49.655: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6967 from pod dns-6967/dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef: the server could not find the requested resource (get pods dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef)
Aug 26 22:53:49.658: INFO: Unable to read wheezy_udp@dns-test-service.dns-6967.svc from pod dns-6967/dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef: the server could not find the requested resource (get pods dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef)
Aug 26 22:53:49.660: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6967.svc from pod dns-6967/dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef: the server could not find the requested resource (get pods dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef)
Aug 26 22:53:49.663: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6967.svc from pod dns-6967/dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef: the server could not find the requested resource (get pods dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef)
Aug 26 22:53:49.665: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6967.svc from pod dns-6967/dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef: the server could not find the requested resource (get pods dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef)
Aug 26 22:53:49.772: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6967/dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef: the server could not find the requested resource (get pods dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef)
Aug 26 22:53:49.775: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6967/dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef: the server could not find the requested resource (get pods dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef)
Aug 26 22:53:49.778: INFO: Unable to read jessie_udp@dns-test-service.dns-6967 from pod dns-6967/dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef: the server could not find the requested resource (get pods dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef)
Aug 26 22:53:49.795: INFO: Unable to read jessie_tcp@dns-test-service.dns-6967 from pod dns-6967/dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef: the server could not find the requested resource (get pods dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef)
Aug 26 22:53:49.808: INFO: Unable to read jessie_udp@dns-test-service.dns-6967.svc from pod dns-6967/dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef: the server could not find the requested resource (get pods dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef)
Aug 26 22:53:49.813: INFO: Unable to read jessie_tcp@dns-test-service.dns-6967.svc from pod dns-6967/dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef: the server could not find the requested resource (get pods dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef)
Aug 26 22:53:49.817: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6967.svc from pod dns-6967/dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef: the server could not find the requested resource (get pods dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef)
Aug 26 22:53:49.820: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6967.svc from pod dns-6967/dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef: the server could not find the requested resource (get pods dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef)
Aug 26 22:53:49.862: INFO: Lookups using dns-6967/dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6967 wheezy_tcp@dns-test-service.dns-6967 wheezy_udp@dns-test-service.dns-6967.svc wheezy_tcp@dns-test-service.dns-6967.svc wheezy_udp@_http._tcp.dns-test-service.dns-6967.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6967.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6967 jessie_tcp@dns-test-service.dns-6967 jessie_udp@dns-test-service.dns-6967.svc jessie_tcp@dns-test-service.dns-6967.svc jessie_udp@_http._tcp.dns-test-service.dns-6967.svc jessie_tcp@_http._tcp.dns-test-service.dns-6967.svc]

Aug 26 22:53:54 - 22:54:14: INFO: five further probe rounds (22:53:54, 22:54:00, 22:54:04, 22:54:09, 22:54:14) repeated the same 16 failed wheezy/jessie lookups against dns-6967/dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef, each with "the server could not find the requested resource (get pods dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef)"

Aug 26 22:54:19.925: INFO: DNS probes using dns-6967/dns-test-022eadda-3236-4059-b4a7-6d68edc4c9ef succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:54:20.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6967" for this suite.

• [SLOW TEST:39.465 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":45,"skipped":562,"failed":0}
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:54:20.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-0e27304e-d7af-4219-b395-67e66b827fc1
STEP: Creating a pod to test consume configMaps
Aug 26 22:54:20.921: INFO: Waiting up to 5m0s for pod "pod-configmaps-caf0938f-171b-49ce-a809-aee70deeeb62" in namespace "configmap-9516" to be "success or failure"
Aug 26 22:54:20.930: INFO: Pod "pod-configmaps-caf0938f-171b-49ce-a809-aee70deeeb62": Phase="Pending", Reason="", readiness=false. Elapsed: 9.55721ms
Aug 26 22:54:23.026: INFO: Pod "pod-configmaps-caf0938f-171b-49ce-a809-aee70deeeb62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105125549s
Aug 26 22:54:25.349: INFO: Pod "pod-configmaps-caf0938f-171b-49ce-a809-aee70deeeb62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.428728226s
STEP: Saw pod success
Aug 26 22:54:25.349: INFO: Pod "pod-configmaps-caf0938f-171b-49ce-a809-aee70deeeb62" satisfied condition "success or failure"
Aug 26 22:54:25.354: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-caf0938f-171b-49ce-a809-aee70deeeb62 container configmap-volume-test: 
STEP: delete the pod
Aug 26 22:54:25.376: INFO: Waiting for pod pod-configmaps-caf0938f-171b-49ce-a809-aee70deeeb62 to disappear
Aug 26 22:54:25.391: INFO: Pod pod-configmaps-caf0938f-171b-49ce-a809-aee70deeeb62 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:54:25.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9516" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":563,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:54:25.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 22:54:26.331: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 22:54:28.341: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734079266, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734079266, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734079266, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734079266, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 22:54:31.567: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 22:54:31.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9178-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:54:32.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9024" for this suite.
STEP: Destroying namespace "webhook-9024-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.587 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":47,"skipped":570,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:54:33.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 26 22:54:33.240: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:54:41.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3648" for this suite.

• [SLOW TEST:8.839 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":48,"skipped":586,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:54:41.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 26 22:54:41.957: INFO: Waiting up to 5m0s for pod "pod-00938634-6b2c-47a4-af08-ae5547a23798" in namespace "emptydir-2099" to be "success or failure"
Aug 26 22:54:41.961: INFO: Pod "pod-00938634-6b2c-47a4-af08-ae5547a23798": Phase="Pending", Reason="", readiness=false. Elapsed: 3.44618ms
Aug 26 22:54:43.965: INFO: Pod "pod-00938634-6b2c-47a4-af08-ae5547a23798": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007522238s
Aug 26 22:54:45.968: INFO: Pod "pod-00938634-6b2c-47a4-af08-ae5547a23798": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010856311s
Aug 26 22:54:47.972: INFO: Pod "pod-00938634-6b2c-47a4-af08-ae5547a23798": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014952701s
STEP: Saw pod success
Aug 26 22:54:47.973: INFO: Pod "pod-00938634-6b2c-47a4-af08-ae5547a23798" satisfied condition "success or failure"
Aug 26 22:54:47.976: INFO: Trying to get logs from node jerma-worker2 pod pod-00938634-6b2c-47a4-af08-ae5547a23798 container test-container: 
STEP: delete the pod
Aug 26 22:54:48.081: INFO: Waiting for pod pod-00938634-6b2c-47a4-af08-ae5547a23798 to disappear
Aug 26 22:54:48.084: INFO: Pod pod-00938634-6b2c-47a4-af08-ae5547a23798 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:54:48.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2099" for this suite.

• [SLOW TEST:6.242 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":592,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:54:48.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 26 22:54:48.152: INFO: Waiting up to 5m0s for pod "pod-eae6f49c-36f3-4024-85c7-3865080d6d44" in namespace "emptydir-3097" to be "success or failure"
Aug 26 22:54:48.169: INFO: Pod "pod-eae6f49c-36f3-4024-85c7-3865080d6d44": Phase="Pending", Reason="", readiness=false. Elapsed: 16.924102ms
Aug 26 22:54:50.181: INFO: Pod "pod-eae6f49c-36f3-4024-85c7-3865080d6d44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028712332s
Aug 26 22:54:52.194: INFO: Pod "pod-eae6f49c-36f3-4024-85c7-3865080d6d44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042420148s
STEP: Saw pod success
Aug 26 22:54:52.194: INFO: Pod "pod-eae6f49c-36f3-4024-85c7-3865080d6d44" satisfied condition "success or failure"
Aug 26 22:54:52.197: INFO: Trying to get logs from node jerma-worker2 pod pod-eae6f49c-36f3-4024-85c7-3865080d6d44 container test-container: 
STEP: delete the pod
Aug 26 22:54:52.211: INFO: Waiting for pod pod-eae6f49c-36f3-4024-85c7-3865080d6d44 to disappear
Aug 26 22:54:52.228: INFO: Pod pod-eae6f49c-36f3-4024-85c7-3865080d6d44 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:54:52.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3097" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":671,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:54:52.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-8562a641-7cbe-45d0-bb4f-f7503d5e2ff4
STEP: Creating a pod to test consume configMaps
Aug 26 22:54:52.625: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b3f629b9-720f-47b4-bd1f-6f6baceae2ba" in namespace "projected-1915" to be "success or failure"
Aug 26 22:54:52.630: INFO: Pod "pod-projected-configmaps-b3f629b9-720f-47b4-bd1f-6f6baceae2ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0433ms
Aug 26 22:54:54.633: INFO: Pod "pod-projected-configmaps-b3f629b9-720f-47b4-bd1f-6f6baceae2ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0076651s
Aug 26 22:54:56.734: INFO: Pod "pod-projected-configmaps-b3f629b9-720f-47b4-bd1f-6f6baceae2ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108087098s
Aug 26 22:54:58.738: INFO: Pod "pod-projected-configmaps-b3f629b9-720f-47b4-bd1f-6f6baceae2ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.112516938s
STEP: Saw pod success
Aug 26 22:54:58.738: INFO: Pod "pod-projected-configmaps-b3f629b9-720f-47b4-bd1f-6f6baceae2ba" satisfied condition "success or failure"
Aug 26 22:54:58.741: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-b3f629b9-720f-47b4-bd1f-6f6baceae2ba container projected-configmap-volume-test: 
STEP: delete the pod
Aug 26 22:54:58.759: INFO: Waiting for pod pod-projected-configmaps-b3f629b9-720f-47b4-bd1f-6f6baceae2ba to disappear
Aug 26 22:54:58.764: INFO: Pod pod-projected-configmaps-b3f629b9-720f-47b4-bd1f-6f6baceae2ba no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:54:58.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1915" for this suite.

• [SLOW TEST:6.535 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":673,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:54:58.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-8e399030-beff-4e46-870b-bc1a40e309be
STEP: Creating a pod to test consume configMaps
Aug 26 22:54:58.879: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b1a7c70b-1e02-4d93-91da-2bc5fb955e8c" in namespace "projected-1573" to be "success or failure"
Aug 26 22:54:58.890: INFO: Pod "pod-projected-configmaps-b1a7c70b-1e02-4d93-91da-2bc5fb955e8c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.611836ms
Aug 26 22:55:00.894: INFO: Pod "pod-projected-configmaps-b1a7c70b-1e02-4d93-91da-2bc5fb955e8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014548882s
Aug 26 22:55:02.898: INFO: Pod "pod-projected-configmaps-b1a7c70b-1e02-4d93-91da-2bc5fb955e8c": Phase="Running", Reason="", readiness=true. Elapsed: 4.018684577s
Aug 26 22:55:04.901: INFO: Pod "pod-projected-configmaps-b1a7c70b-1e02-4d93-91da-2bc5fb955e8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022307659s
STEP: Saw pod success
Aug 26 22:55:04.901: INFO: Pod "pod-projected-configmaps-b1a7c70b-1e02-4d93-91da-2bc5fb955e8c" satisfied condition "success or failure"
Aug 26 22:55:04.905: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-b1a7c70b-1e02-4d93-91da-2bc5fb955e8c container projected-configmap-volume-test: 
STEP: delete the pod
Aug 26 22:55:04.941: INFO: Waiting for pod pod-projected-configmaps-b1a7c70b-1e02-4d93-91da-2bc5fb955e8c to disappear
Aug 26 22:55:04.956: INFO: Pod pod-projected-configmaps-b1a7c70b-1e02-4d93-91da-2bc5fb955e8c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:55:04.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1573" for this suite.

• [SLOW TEST:6.196 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":704,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:55:04.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Aug 26 22:55:11.327: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7390 PodName:pod-sharedvolume-b36a85bd-6b70-4d93-bf2c-39c3e029a582 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 22:55:11.327: INFO: >>> kubeConfig: /root/.kube/config
I0826 22:55:11.347676       6 log.go:172] (0xc00060bd90) (0xc001296dc0) Create stream
I0826 22:55:11.347698       6 log.go:172] (0xc00060bd90) (0xc001296dc0) Stream added, broadcasting: 1
I0826 22:55:11.348978       6 log.go:172] (0xc00060bd90) Reply frame received for 1
I0826 22:55:11.349024       6 log.go:172] (0xc00060bd90) (0xc000dfa6e0) Create stream
I0826 22:55:11.349037       6 log.go:172] (0xc00060bd90) (0xc000dfa6e0) Stream added, broadcasting: 3
I0826 22:55:11.349747       6 log.go:172] (0xc00060bd90) Reply frame received for 3
I0826 22:55:11.349778       6 log.go:172] (0xc00060bd90) (0xc0012e0320) Create stream
I0826 22:55:11.349790       6 log.go:172] (0xc00060bd90) (0xc0012e0320) Stream added, broadcasting: 5
I0826 22:55:11.350412       6 log.go:172] (0xc00060bd90) Reply frame received for 5
I0826 22:55:11.399268       6 log.go:172] (0xc00060bd90) Data frame received for 5
I0826 22:55:11.399292       6 log.go:172] (0xc0012e0320) (5) Data frame handling
I0826 22:55:11.399311       6 log.go:172] (0xc00060bd90) Data frame received for 3
I0826 22:55:11.399318       6 log.go:172] (0xc000dfa6e0) (3) Data frame handling
I0826 22:55:11.399331       6 log.go:172] (0xc000dfa6e0) (3) Data frame sent
I0826 22:55:11.399339       6 log.go:172] (0xc00060bd90) Data frame received for 3
I0826 22:55:11.399347       6 log.go:172] (0xc000dfa6e0) (3) Data frame handling
I0826 22:55:11.400929       6 log.go:172] (0xc00060bd90) Data frame received for 1
I0826 22:55:11.400964       6 log.go:172] (0xc001296dc0) (1) Data frame handling
I0826 22:55:11.401004       6 log.go:172] (0xc001296dc0) (1) Data frame sent
I0826 22:55:11.401032       6 log.go:172] (0xc00060bd90) (0xc001296dc0) Stream removed, broadcasting: 1
I0826 22:55:11.401093       6 log.go:172] (0xc00060bd90) Go away received
I0826 22:55:11.401144       6 log.go:172] (0xc00060bd90) (0xc001296dc0) Stream removed, broadcasting: 1
I0826 22:55:11.401165       6 log.go:172] (0xc00060bd90) (0xc000dfa6e0) Stream removed, broadcasting: 3
I0826 22:55:11.401179       6 log.go:172] (0xc00060bd90) (0xc0012e0320) Stream removed, broadcasting: 5
Aug 26 22:55:11.401: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:55:11.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7390" for this suite.

• [SLOW TEST:6.441 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":53,"skipped":708,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:55:11.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl logs
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
STEP: creating a pod
Aug 26 22:55:11.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-7230 -- logs-generator --log-lines-total 100 --run-duration 20s'
Aug 26 22:55:11.592: INFO: stderr: ""
Aug 26 22:55:11.592: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Aug 26 22:55:11.592: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Aug 26 22:55:11.592: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7230" to be "running and ready, or succeeded"
Aug 26 22:55:11.599: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.796128ms
Aug 26 22:55:13.679: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087372672s
Aug 26 22:55:15.683: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.091181422s
Aug 26 22:55:15.683: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Aug 26 22:55:15.683: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Aug 26 22:55:15.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7230'
Aug 26 22:55:15.849: INFO: stderr: ""
Aug 26 22:55:15.849: INFO: stdout: "I0826 22:55:14.701981       1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/5dsq 299\nI0826 22:55:14.902153       1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/5q7x 360\nI0826 22:55:15.102136       1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/6tnj 206\nI0826 22:55:15.302166       1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/nv7 367\nI0826 22:55:15.502236       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/sgk5 258\nI0826 22:55:15.702178       1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/5fdp 238\n"
STEP: limiting log lines
Aug 26 22:55:15.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7230 --tail=1'
Aug 26 22:55:15.959: INFO: stderr: ""
Aug 26 22:55:15.959: INFO: stdout: "I0826 22:55:15.902116       1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/zpp 582\n"
Aug 26 22:55:15.959: INFO: got output "I0826 22:55:15.902116       1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/zpp 582\n"
STEP: limiting log bytes
Aug 26 22:55:15.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7230 --limit-bytes=1'
Aug 26 22:55:16.152: INFO: stderr: ""
Aug 26 22:55:16.152: INFO: stdout: "I"
Aug 26 22:55:16.152: INFO: got output "I"
STEP: exposing timestamps
Aug 26 22:55:16.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7230 --tail=1 --timestamps'
Aug 26 22:55:16.267: INFO: stderr: ""
Aug 26 22:55:16.267: INFO: stdout: "2020-08-26T22:55:16.102267332Z I0826 22:55:16.102137       1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/f2j6 522\n"
Aug 26 22:55:16.267: INFO: got output "2020-08-26T22:55:16.102267332Z I0826 22:55:16.102137       1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/f2j6 522\n"
STEP: restricting to a time range
Aug 26 22:55:18.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7230 --since=1s'
Aug 26 22:55:18.891: INFO: stderr: ""
Aug 26 22:55:18.891: INFO: stdout: "I0826 22:55:17.902206       1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/cd2 249\nI0826 22:55:18.102184       1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/n9r 320\nI0826 22:55:18.302180       1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/2kl7 505\nI0826 22:55:18.502173       1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/fls 291\nI0826 22:55:18.702193       1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/pwl 263\n"
Aug 26 22:55:18.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7230 --since=24h'
Aug 26 22:55:19.001: INFO: stderr: ""
Aug 26 22:55:19.001: INFO: stdout: "I0826 22:55:14.701981       1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/5dsq 299\nI0826 22:55:14.902153       1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/5q7x 360\nI0826 22:55:15.102136       1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/6tnj 206\nI0826 22:55:15.302166       1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/nv7 367\nI0826 22:55:15.502236       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/sgk5 258\nI0826 22:55:15.702178       1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/5fdp 238\nI0826 22:55:15.902116       1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/zpp 582\nI0826 22:55:16.102137       1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/f2j6 522\nI0826 22:55:16.302199       1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/xbd 374\nI0826 22:55:16.502165       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/6ht 456\nI0826 22:55:16.702179       1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/ftng 260\nI0826 22:55:16.902173       1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/9jsw 281\nI0826 22:55:17.102189       1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/lsq 586\nI0826 22:55:17.302206       1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/v7m 554\nI0826 22:55:17.502175       1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/hg9z 217\nI0826 22:55:17.702181       1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/rf98 434\nI0826 22:55:17.902206       1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/cd2 249\nI0826 22:55:18.102184       1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/n9r 320\nI0826 22:55:18.302180       1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/2kl7 505\nI0826 22:55:18.502173       1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/fls 291\nI0826 22:55:18.702193       1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/pwl 263\nI0826 22:55:18.902179       1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/bcv 243\n"
[AfterEach] Kubectl logs
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Aug 26 22:55:19.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-7230'
Aug 26 22:55:31.697: INFO: stderr: ""
Aug 26 22:55:31.697: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:55:31.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7230" for this suite.

• [SLOW TEST:20.301 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1354
    should be able to retrieve and filter logs  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":54,"skipped":709,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:55:31.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Aug 26 22:55:31.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Aug 26 22:55:32.078: INFO: stderr: ""
Aug 26 22:55:32.078: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37695\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37695/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:55:32.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3827" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":278,"completed":55,"skipped":713,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:55:32.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Aug 26 22:55:32.283: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix766201912/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:55:32.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8354" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":56,"skipped":715,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:55:32.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Aug 26 22:55:32.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:55:48.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4800" for this suite.

• [SLOW TEST:16.390 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":57,"skipped":718,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:55:48.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 22:55:48.897: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb44dd25-e173-4959-8d3a-f77f2035f906" in namespace "projected-3408" to be "success or failure"
Aug 26 22:55:48.909: INFO: Pod "downwardapi-volume-bb44dd25-e173-4959-8d3a-f77f2035f906": Phase="Pending", Reason="", readiness=false. Elapsed: 11.782434ms
Aug 26 22:55:50.913: INFO: Pod "downwardapi-volume-bb44dd25-e173-4959-8d3a-f77f2035f906": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016102557s
Aug 26 22:55:52.917: INFO: Pod "downwardapi-volume-bb44dd25-e173-4959-8d3a-f77f2035f906": Phase="Running", Reason="", readiness=true. Elapsed: 4.019764367s
Aug 26 22:55:54.920: INFO: Pod "downwardapi-volume-bb44dd25-e173-4959-8d3a-f77f2035f906": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023482205s
STEP: Saw pod success
Aug 26 22:55:54.920: INFO: Pod "downwardapi-volume-bb44dd25-e173-4959-8d3a-f77f2035f906" satisfied condition "success or failure"
Aug 26 22:55:54.924: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-bb44dd25-e173-4959-8d3a-f77f2035f906 container client-container: 
STEP: delete the pod
Aug 26 22:55:54.963: INFO: Waiting for pod downwardapi-volume-bb44dd25-e173-4959-8d3a-f77f2035f906 to disappear
Aug 26 22:55:54.975: INFO: Pod downwardapi-volume-bb44dd25-e173-4959-8d3a-f77f2035f906 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:55:54.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3408" for this suite.

• [SLOW TEST:6.217 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":741,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:55:54.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 26 22:55:55.101: INFO: Waiting up to 5m0s for pod "pod-c22edf23-5807-45a1-8157-710066269230" in namespace "emptydir-3590" to be "success or failure"
Aug 26 22:55:55.104: INFO: Pod "pod-c22edf23-5807-45a1-8157-710066269230": Phase="Pending", Reason="", readiness=false. Elapsed: 3.274082ms
Aug 26 22:55:57.327: INFO: Pod "pod-c22edf23-5807-45a1-8157-710066269230": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226178216s
Aug 26 22:55:59.434: INFO: Pod "pod-c22edf23-5807-45a1-8157-710066269230": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.333320821s
STEP: Saw pod success
Aug 26 22:55:59.434: INFO: Pod "pod-c22edf23-5807-45a1-8157-710066269230" satisfied condition "success or failure"
Aug 26 22:55:59.437: INFO: Trying to get logs from node jerma-worker pod pod-c22edf23-5807-45a1-8157-710066269230 container test-container: 
STEP: delete the pod
Aug 26 22:55:59.616: INFO: Waiting for pod pod-c22edf23-5807-45a1-8157-710066269230 to disappear
Aug 26 22:55:59.657: INFO: Pod pod-c22edf23-5807-45a1-8157-710066269230 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:55:59.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3590" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":750,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:55:59.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 22:55:59.736: INFO: Waiting up to 5m0s for pod "downwardapi-volume-97e42721-a99b-4b05-ba62-fdf1977664ed" in namespace "downward-api-508" to be "success or failure"
Aug 26 22:55:59.775: INFO: Pod "downwardapi-volume-97e42721-a99b-4b05-ba62-fdf1977664ed": Phase="Pending", Reason="", readiness=false. Elapsed: 39.071183ms
Aug 26 22:56:01.829: INFO: Pod "downwardapi-volume-97e42721-a99b-4b05-ba62-fdf1977664ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093249443s
Aug 26 22:56:03.836: INFO: Pod "downwardapi-volume-97e42721-a99b-4b05-ba62-fdf1977664ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099510312s
Aug 26 22:56:05.840: INFO: Pod "downwardapi-volume-97e42721-a99b-4b05-ba62-fdf1977664ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.103772585s
STEP: Saw pod success
Aug 26 22:56:05.840: INFO: Pod "downwardapi-volume-97e42721-a99b-4b05-ba62-fdf1977664ed" satisfied condition "success or failure"
Aug 26 22:56:05.843: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-97e42721-a99b-4b05-ba62-fdf1977664ed container client-container: 
STEP: delete the pod
Aug 26 22:56:05.880: INFO: Waiting for pod downwardapi-volume-97e42721-a99b-4b05-ba62-fdf1977664ed to disappear
Aug 26 22:56:05.918: INFO: Pod downwardapi-volume-97e42721-a99b-4b05-ba62-fdf1977664ed no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:56:05.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-508" for this suite.

• [SLOW TEST:6.266 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":761,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:56:05.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 26 22:56:10.739: INFO: Successfully updated pod "annotationupdatecbf56d67-cb38-4a19-9ccb-37fb4f95d3ac"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:56:14.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2174" for this suite.

• [SLOW TEST:8.981 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":771,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:56:14.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-574
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 26 22:56:15.019: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 26 22:56:43.186: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.174:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-574 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 22:56:43.186: INFO: >>> kubeConfig: /root/.kube/config
I0826 22:56:43.221573       6 log.go:172] (0xc00361ef20) (0xc0016450e0) Create stream
I0826 22:56:43.221624       6 log.go:172] (0xc00361ef20) (0xc0016450e0) Stream added, broadcasting: 1
I0826 22:56:43.223918       6 log.go:172] (0xc00361ef20) Reply frame received for 1
I0826 22:56:43.223976       6 log.go:172] (0xc00361ef20) (0xc001660000) Create stream
I0826 22:56:43.223993       6 log.go:172] (0xc00361ef20) (0xc001660000) Stream added, broadcasting: 3
I0826 22:56:43.225249       6 log.go:172] (0xc00361ef20) Reply frame received for 3
I0826 22:56:43.225316       6 log.go:172] (0xc00361ef20) (0xc001f02c80) Create stream
I0826 22:56:43.225410       6 log.go:172] (0xc00361ef20) (0xc001f02c80) Stream added, broadcasting: 5
I0826 22:56:43.226358       6 log.go:172] (0xc00361ef20) Reply frame received for 5
I0826 22:56:43.302680       6 log.go:172] (0xc00361ef20) Data frame received for 5
I0826 22:56:43.302710       6 log.go:172] (0xc001f02c80) (5) Data frame handling
I0826 22:56:43.302731       6 log.go:172] (0xc00361ef20) Data frame received for 3
I0826 22:56:43.302747       6 log.go:172] (0xc001660000) (3) Data frame handling
I0826 22:56:43.302755       6 log.go:172] (0xc001660000) (3) Data frame sent
I0826 22:56:43.302760       6 log.go:172] (0xc00361ef20) Data frame received for 3
I0826 22:56:43.302765       6 log.go:172] (0xc001660000) (3) Data frame handling
I0826 22:56:43.304610       6 log.go:172] (0xc00361ef20) Data frame received for 1
I0826 22:56:43.304650       6 log.go:172] (0xc0016450e0) (1) Data frame handling
I0826 22:56:43.304665       6 log.go:172] (0xc0016450e0) (1) Data frame sent
I0826 22:56:43.304687       6 log.go:172] (0xc00361ef20) (0xc0016450e0) Stream removed, broadcasting: 1
I0826 22:56:43.304805       6 log.go:172] (0xc00361ef20) Go away received
I0826 22:56:43.305001       6 log.go:172] (0xc00361ef20) (0xc0016450e0) Stream removed, broadcasting: 1
I0826 22:56:43.305037       6 log.go:172] (0xc00361ef20) (0xc001660000) Stream removed, broadcasting: 3
I0826 22:56:43.305073       6 log.go:172] (0xc00361ef20) (0xc001f02c80) Stream removed, broadcasting: 5
Aug 26 22:56:43.305: INFO: Found all expected endpoints: [netserver-0]
Aug 26 22:56:43.308: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.27:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-574 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 22:56:43.308: INFO: >>> kubeConfig: /root/.kube/config
I0826 22:56:43.334071       6 log.go:172] (0xc004c62370) (0xc001660780) Create stream
I0826 22:56:43.334104       6 log.go:172] (0xc004c62370) (0xc001660780) Stream added, broadcasting: 1
I0826 22:56:43.335953       6 log.go:172] (0xc004c62370) Reply frame received for 1
I0826 22:56:43.335996       6 log.go:172] (0xc004c62370) (0xc001f02e60) Create stream
I0826 22:56:43.336009       6 log.go:172] (0xc004c62370) (0xc001f02e60) Stream added, broadcasting: 3
I0826 22:56:43.336981       6 log.go:172] (0xc004c62370) Reply frame received for 3
I0826 22:56:43.337030       6 log.go:172] (0xc004c62370) (0xc001645180) Create stream
I0826 22:56:43.337046       6 log.go:172] (0xc004c62370) (0xc001645180) Stream added, broadcasting: 5
I0826 22:56:43.337807       6 log.go:172] (0xc004c62370) Reply frame received for 5
I0826 22:56:43.428430       6 log.go:172] (0xc004c62370) Data frame received for 3
I0826 22:56:43.428462       6 log.go:172] (0xc001f02e60) (3) Data frame handling
I0826 22:56:43.428471       6 log.go:172] (0xc001f02e60) (3) Data frame sent
I0826 22:56:43.428477       6 log.go:172] (0xc004c62370) Data frame received for 3
I0826 22:56:43.428499       6 log.go:172] (0xc001f02e60) (3) Data frame handling
I0826 22:56:43.428520       6 log.go:172] (0xc004c62370) Data frame received for 5
I0826 22:56:43.428531       6 log.go:172] (0xc001645180) (5) Data frame handling
I0826 22:56:43.430179       6 log.go:172] (0xc004c62370) Data frame received for 1
I0826 22:56:43.430211       6 log.go:172] (0xc001660780) (1) Data frame handling
I0826 22:56:43.430247       6 log.go:172] (0xc001660780) (1) Data frame sent
I0826 22:56:43.430308       6 log.go:172] (0xc004c62370) (0xc001660780) Stream removed, broadcasting: 1
I0826 22:56:43.430418       6 log.go:172] (0xc004c62370) (0xc001660780) Stream removed, broadcasting: 1
I0826 22:56:43.430429       6 log.go:172] (0xc004c62370) (0xc001f02e60) Stream removed, broadcasting: 3
I0826 22:56:43.430537       6 log.go:172] (0xc004c62370) Go away received
I0826 22:56:43.430634       6 log.go:172] (0xc004c62370) (0xc001645180) Stream removed, broadcasting: 5
Aug 26 22:56:43.430: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:56:43.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-574" for this suite.

• [SLOW TEST:28.537 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":799,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:56:43.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-9a51ab79-78dc-4680-ac83-b4277c98dfe9
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-9a51ab79-78dc-4680-ac83-b4277c98dfe9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:56:49.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2417" for this suite.

• [SLOW TEST:6.457 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":809,"failed":0}
S
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:56:49.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 26 22:56:50.649: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-2102 /api/v1/namespaces/watch-2102/configmaps/e2e-watch-test-resource-version e319a04c-89f8-46b3-b24b-76d8e6ba813d 4030144 0 2020-08-26 22:56:50 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 26 22:56:50.649: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-2102 /api/v1/namespaces/watch-2102/configmaps/e2e-watch-test-resource-version e319a04c-89f8-46b3-b24b-76d8e6ba813d 4030145 0 2020-08-26 22:56:50 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:56:50.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2102" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":64,"skipped":810,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:56:50.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Aug 26 22:56:51.506: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-5234 /api/v1/namespaces/watch-5234/configmaps/e2e-watch-test-watch-closed 7fc384b2-fcc9-47c3-a28c-d58158c55c3b 4030155 0 2020-08-26 22:56:51 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 26 22:56:51.506: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-5234 /api/v1/namespaces/watch-5234/configmaps/e2e-watch-test-watch-closed 7fc384b2-fcc9-47c3-a28c-d58158c55c3b 4030156 0 2020-08-26 22:56:51 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Aug 26 22:56:51.634: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-5234 /api/v1/namespaces/watch-5234/configmaps/e2e-watch-test-watch-closed 7fc384b2-fcc9-47c3-a28c-d58158c55c3b 4030157 0 2020-08-26 22:56:51 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 26 22:56:51.634: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-5234 /api/v1/namespaces/watch-5234/configmaps/e2e-watch-test-watch-closed 7fc384b2-fcc9-47c3-a28c-d58158c55c3b 4030158 0 2020-08-26 22:56:51 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:56:51.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5234" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":65,"skipped":894,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:56:51.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Aug 26 22:56:51.835: INFO: Created pod &Pod{ObjectMeta:{dns-9000  dns-9000 /api/v1/namespaces/dns-9000/pods/dns-9000 d8ef1f25-e8df-4d54-b879-8b6478ce9d13 4030164 0 2020-08-26 22:56:51 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6ckp9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6ckp9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6ckp9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Aug 26 22:56:55.974: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-9000 PodName:dns-9000 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 22:56:55.974: INFO: >>> kubeConfig: /root/.kube/config
I0826 22:56:56.006562       6 log.go:172] (0xc004c629a0) (0xc000e65040) Create stream
I0826 22:56:56.006604       6 log.go:172] (0xc004c629a0) (0xc000e65040) Stream added, broadcasting: 1
I0826 22:56:56.008530       6 log.go:172] (0xc004c629a0) Reply frame received for 1
I0826 22:56:56.008560       6 log.go:172] (0xc004c629a0) (0xc001c5bcc0) Create stream
I0826 22:56:56.008569       6 log.go:172] (0xc004c629a0) (0xc001c5bcc0) Stream added, broadcasting: 3
I0826 22:56:56.009503       6 log.go:172] (0xc004c629a0) Reply frame received for 3
I0826 22:56:56.009535       6 log.go:172] (0xc004c629a0) (0xc0009da8c0) Create stream
I0826 22:56:56.009545       6 log.go:172] (0xc004c629a0) (0xc0009da8c0) Stream added, broadcasting: 5
I0826 22:56:56.010316       6 log.go:172] (0xc004c629a0) Reply frame received for 5
I0826 22:56:56.098537       6 log.go:172] (0xc004c629a0) Data frame received for 3
I0826 22:56:56.098580       6 log.go:172] (0xc001c5bcc0) (3) Data frame handling
I0826 22:56:56.098595       6 log.go:172] (0xc001c5bcc0) (3) Data frame sent
I0826 22:56:56.101831       6 log.go:172] (0xc004c629a0) Data frame received for 3
I0826 22:56:56.101854       6 log.go:172] (0xc001c5bcc0) (3) Data frame handling
I0826 22:56:56.101874       6 log.go:172] (0xc004c629a0) Data frame received for 5
I0826 22:56:56.101916       6 log.go:172] (0xc0009da8c0) (5) Data frame handling
I0826 22:56:56.103826       6 log.go:172] (0xc004c629a0) Data frame received for 1
I0826 22:56:56.103899       6 log.go:172] (0xc000e65040) (1) Data frame handling
I0826 22:56:56.103943       6 log.go:172] (0xc000e65040) (1) Data frame sent
I0826 22:56:56.103972       6 log.go:172] (0xc004c629a0) (0xc000e65040) Stream removed, broadcasting: 1
I0826 22:56:56.104000       6 log.go:172] (0xc004c629a0) Go away received
I0826 22:56:56.104167       6 log.go:172] (0xc004c629a0) (0xc000e65040) Stream removed, broadcasting: 1
I0826 22:56:56.104196       6 log.go:172] (0xc004c629a0) (0xc001c5bcc0) Stream removed, broadcasting: 3
I0826 22:56:56.104213       6 log.go:172] (0xc004c629a0) (0xc0009da8c0) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Aug 26 22:56:56.104: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-9000 PodName:dns-9000 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 22:56:56.104: INFO: >>> kubeConfig: /root/.kube/config
I0826 22:56:56.453042       6 log.go:172] (0xc0042acd10) (0xc000aad5e0) Create stream
I0826 22:56:56.453073       6 log.go:172] (0xc0042acd10) (0xc000aad5e0) Stream added, broadcasting: 1
I0826 22:56:56.460421       6 log.go:172] (0xc0042acd10) Reply frame received for 1
I0826 22:56:56.460459       6 log.go:172] (0xc0042acd10) (0xc000aac0a0) Create stream
I0826 22:56:56.460469       6 log.go:172] (0xc0042acd10) (0xc000aac0a0) Stream added, broadcasting: 3
I0826 22:56:56.461415       6 log.go:172] (0xc0042acd10) Reply frame received for 3
I0826 22:56:56.461440       6 log.go:172] (0xc0042acd10) (0xc000aacf00) Create stream
I0826 22:56:56.461450       6 log.go:172] (0xc0042acd10) (0xc000aacf00) Stream added, broadcasting: 5
I0826 22:56:56.462210       6 log.go:172] (0xc0042acd10) Reply frame received for 5
I0826 22:56:56.539371       6 log.go:172] (0xc0042acd10) Data frame received for 3
I0826 22:56:56.539407       6 log.go:172] (0xc000aac0a0) (3) Data frame handling
I0826 22:56:56.539422       6 log.go:172] (0xc000aac0a0) (3) Data frame sent
I0826 22:56:56.542303       6 log.go:172] (0xc0042acd10) Data frame received for 3
I0826 22:56:56.542340       6 log.go:172] (0xc000aac0a0) (3) Data frame handling
I0826 22:56:56.542370       6 log.go:172] (0xc0042acd10) Data frame received for 5
I0826 22:56:56.542387       6 log.go:172] (0xc000aacf00) (5) Data frame handling
I0826 22:56:56.543607       6 log.go:172] (0xc0042acd10) Data frame received for 1
I0826 22:56:56.543627       6 log.go:172] (0xc000aad5e0) (1) Data frame handling
I0826 22:56:56.543649       6 log.go:172] (0xc000aad5e0) (1) Data frame sent
I0826 22:56:56.543666       6 log.go:172] (0xc0042acd10) (0xc000aad5e0) Stream removed, broadcasting: 1
I0826 22:56:56.543700       6 log.go:172] (0xc0042acd10) Go away received
I0826 22:56:56.543821       6 log.go:172] (0xc0042acd10) (0xc000aad5e0) Stream removed, broadcasting: 1
I0826 22:56:56.543851       6 log.go:172] (0xc0042acd10) (0xc000aac0a0) Stream removed, broadcasting: 3
I0826 22:56:56.543880       6 log.go:172] (0xc0042acd10) (0xc000aacf00) Stream removed, broadcasting: 5
Aug 26 22:56:56.543: INFO: Deleting pod dns-9000...
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:56:56.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9000" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":66,"skipped":900,"failed":0}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:56:56.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-hzwn
STEP: Creating a pod to test atomic-volume-subpath
Aug 26 22:56:57.203: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-hzwn" in namespace "subpath-8270" to be "success or failure"
Aug 26 22:56:57.222: INFO: Pod "pod-subpath-test-downwardapi-hzwn": Phase="Pending", Reason="", readiness=false. Elapsed: 18.99109ms
Aug 26 22:56:59.393: INFO: Pod "pod-subpath-test-downwardapi-hzwn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.189520139s
Aug 26 22:57:01.395: INFO: Pod "pod-subpath-test-downwardapi-hzwn": Phase="Running", Reason="", readiness=true. Elapsed: 4.192314325s
Aug 26 22:57:03.400: INFO: Pod "pod-subpath-test-downwardapi-hzwn": Phase="Running", Reason="", readiness=true. Elapsed: 6.196756341s
Aug 26 22:57:05.404: INFO: Pod "pod-subpath-test-downwardapi-hzwn": Phase="Running", Reason="", readiness=true. Elapsed: 8.200813189s
Aug 26 22:57:07.427: INFO: Pod "pod-subpath-test-downwardapi-hzwn": Phase="Running", Reason="", readiness=true. Elapsed: 10.223371303s
Aug 26 22:57:09.431: INFO: Pod "pod-subpath-test-downwardapi-hzwn": Phase="Running", Reason="", readiness=true. Elapsed: 12.22743992s
Aug 26 22:57:11.434: INFO: Pod "pod-subpath-test-downwardapi-hzwn": Phase="Running", Reason="", readiness=true. Elapsed: 14.230905396s
Aug 26 22:57:13.440: INFO: Pod "pod-subpath-test-downwardapi-hzwn": Phase="Running", Reason="", readiness=true. Elapsed: 16.237255577s
Aug 26 22:57:15.443: INFO: Pod "pod-subpath-test-downwardapi-hzwn": Phase="Running", Reason="", readiness=true. Elapsed: 18.240148373s
Aug 26 22:57:17.479: INFO: Pod "pod-subpath-test-downwardapi-hzwn": Phase="Running", Reason="", readiness=true. Elapsed: 20.27592005s
Aug 26 22:57:19.483: INFO: Pod "pod-subpath-test-downwardapi-hzwn": Phase="Running", Reason="", readiness=true. Elapsed: 22.280055041s
Aug 26 22:57:21.490: INFO: Pod "pod-subpath-test-downwardapi-hzwn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.287034481s
STEP: Saw pod success
Aug 26 22:57:21.490: INFO: Pod "pod-subpath-test-downwardapi-hzwn" satisfied condition "success or failure"
Aug 26 22:57:21.493: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-hzwn container test-container-subpath-downwardapi-hzwn: 
STEP: delete the pod
Aug 26 22:57:21.548: INFO: Waiting for pod pod-subpath-test-downwardapi-hzwn to disappear
Aug 26 22:57:21.654: INFO: Pod pod-subpath-test-downwardapi-hzwn no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-hzwn
Aug 26 22:57:21.654: INFO: Deleting pod "pod-subpath-test-downwardapi-hzwn" in namespace "subpath-8270"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:57:21.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8270" for this suite.

• [SLOW TEST:25.033 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":67,"skipped":902,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:57:21.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-f99597cf-2c68-4422-9cc4-1b5bbb6f6d99
STEP: Creating a pod to test consume secrets
Aug 26 22:57:21.955: INFO: Waiting up to 5m0s for pod "pod-secrets-679cd735-8bf8-436e-ac24-a6cd0fbaed54" in namespace "secrets-2507" to be "success or failure"
Aug 26 22:57:21.972: INFO: Pod "pod-secrets-679cd735-8bf8-436e-ac24-a6cd0fbaed54": Phase="Pending", Reason="", readiness=false. Elapsed: 16.691507ms
Aug 26 22:57:24.058: INFO: Pod "pod-secrets-679cd735-8bf8-436e-ac24-a6cd0fbaed54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103205204s
Aug 26 22:57:26.062: INFO: Pod "pod-secrets-679cd735-8bf8-436e-ac24-a6cd0fbaed54": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107388668s
Aug 26 22:57:28.067: INFO: Pod "pod-secrets-679cd735-8bf8-436e-ac24-a6cd0fbaed54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.111758326s
STEP: Saw pod success
Aug 26 22:57:28.067: INFO: Pod "pod-secrets-679cd735-8bf8-436e-ac24-a6cd0fbaed54" satisfied condition "success or failure"
Aug 26 22:57:28.070: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-679cd735-8bf8-436e-ac24-a6cd0fbaed54 container secret-volume-test: 
STEP: delete the pod
Aug 26 22:57:28.106: INFO: Waiting for pod pod-secrets-679cd735-8bf8-436e-ac24-a6cd0fbaed54 to disappear
Aug 26 22:57:28.141: INFO: Pod pod-secrets-679cd735-8bf8-436e-ac24-a6cd0fbaed54 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:57:28.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2507" for this suite.

• [SLOW TEST:6.497 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":950,"failed":0}
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:57:28.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:57:28.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1481" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":950,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:57:28.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-9434
STEP: creating replication controller nodeport-test in namespace services-9434
I0826 22:57:28.510361       6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-9434, replica count: 2
I0826 22:57:31.560852       6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 22:57:34.561057       6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 26 22:57:34.561: INFO: Creating new exec pod
Aug 26 22:57:41.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9434 execpodbtzbv -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Aug 26 22:57:41.825: INFO: stderr: "I0826 22:57:41.742336     541 log.go:172] (0xc0000f6a50) (0xc0007014a0) Create stream\nI0826 22:57:41.742492     541 log.go:172] (0xc0000f6a50) (0xc0007014a0) Stream added, broadcasting: 1\nI0826 22:57:41.744949     541 log.go:172] (0xc0000f6a50) Reply frame received for 1\nI0826 22:57:41.744982     541 log.go:172] (0xc0000f6a50) (0xc0005b7a40) Create stream\nI0826 22:57:41.744991     541 log.go:172] (0xc0000f6a50) (0xc0005b7a40) Stream added, broadcasting: 3\nI0826 22:57:41.745800     541 log.go:172] (0xc0000f6a50) Reply frame received for 3\nI0826 22:57:41.745836     541 log.go:172] (0xc0000f6a50) (0xc000236000) Create stream\nI0826 22:57:41.745849     541 log.go:172] (0xc0000f6a50) (0xc000236000) Stream added, broadcasting: 5\nI0826 22:57:41.746661     541 log.go:172] (0xc0000f6a50) Reply frame received for 5\nI0826 22:57:41.818406     541 log.go:172] (0xc0000f6a50) Data frame received for 5\nI0826 22:57:41.818429     541 log.go:172] (0xc000236000) (5) Data frame handling\nI0826 22:57:41.818440     541 log.go:172] (0xc000236000) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0826 22:57:41.819155     541 log.go:172] (0xc0000f6a50) Data frame received for 3\nI0826 22:57:41.819184     541 log.go:172] (0xc0005b7a40) (3) Data frame handling\nI0826 22:57:41.819203     541 log.go:172] (0xc0000f6a50) Data frame received for 5\nI0826 22:57:41.819227     541 log.go:172] (0xc000236000) (5) Data frame handling\nI0826 22:57:41.819241     541 log.go:172] (0xc000236000) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0826 22:57:41.819269     541 log.go:172] (0xc0000f6a50) Data frame received for 5\nI0826 22:57:41.819280     541 log.go:172] (0xc000236000) (5) Data frame handling\nI0826 22:57:41.821237     541 log.go:172] (0xc0000f6a50) Data frame received for 1\nI0826 22:57:41.821257     541 log.go:172] (0xc0007014a0) (1) Data frame handling\nI0826 22:57:41.821266     541 log.go:172] (0xc0007014a0) (1) Data frame sent\nI0826 22:57:41.821276     541 log.go:172] (0xc0000f6a50) (0xc0007014a0) Stream removed, broadcasting: 1\nI0826 22:57:41.821293     541 log.go:172] (0xc0000f6a50) Go away received\nI0826 22:57:41.821593     541 log.go:172] (0xc0000f6a50) (0xc0007014a0) Stream removed, broadcasting: 1\nI0826 22:57:41.821606     541 log.go:172] (0xc0000f6a50) (0xc0005b7a40) Stream removed, broadcasting: 3\nI0826 22:57:41.821612     541 log.go:172] (0xc0000f6a50) (0xc000236000) Stream removed, broadcasting: 5\n"
Aug 26 22:57:41.825: INFO: stdout: ""
Aug 26 22:57:41.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9434 execpodbtzbv -- /bin/sh -x -c nc -zv -t -w 2 10.111.0.4 80'
Aug 26 22:57:42.028: INFO: stderr: "I0826 22:57:41.946915     559 log.go:172] (0xc000456000) (0xc000688780) Create stream\nI0826 22:57:41.946996     559 log.go:172] (0xc000456000) (0xc000688780) Stream added, broadcasting: 1\nI0826 22:57:41.948940     559 log.go:172] (0xc000456000) Reply frame received for 1\nI0826 22:57:41.949000     559 log.go:172] (0xc000456000) (0xc000531540) Create stream\nI0826 22:57:41.949026     559 log.go:172] (0xc000456000) (0xc000531540) Stream added, broadcasting: 3\nI0826 22:57:41.950054     559 log.go:172] (0xc000456000) Reply frame received for 3\nI0826 22:57:41.950091     559 log.go:172] (0xc000456000) (0xc0005315e0) Create stream\nI0826 22:57:41.950110     559 log.go:172] (0xc000456000) (0xc0005315e0) Stream added, broadcasting: 5\nI0826 22:57:41.951136     559 log.go:172] (0xc000456000) Reply frame received for 5\nI0826 22:57:42.017472     559 log.go:172] (0xc000456000) Data frame received for 5\nI0826 22:57:42.017512     559 log.go:172] (0xc0005315e0) (5) Data frame handling\nI0826 22:57:42.017524     559 log.go:172] (0xc0005315e0) (5) Data frame sent\nI0826 22:57:42.017532     559 log.go:172] (0xc000456000) Data frame received for 5\nI0826 22:57:42.017540     559 log.go:172] (0xc0005315e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.0.4 80\nConnection to 10.111.0.4 80 port [tcp/http] succeeded!\nI0826 22:57:42.017577     559 log.go:172] (0xc000456000) Data frame received for 3\nI0826 22:57:42.017606     559 log.go:172] (0xc000531540) (3) Data frame handling\nI0826 22:57:42.019253     559 log.go:172] (0xc000456000) Data frame received for 1\nI0826 22:57:42.019284     559 log.go:172] (0xc000688780) (1) Data frame handling\nI0826 22:57:42.019303     559 log.go:172] (0xc000688780) (1) Data frame sent\nI0826 22:57:42.019318     559 log.go:172] (0xc000456000) (0xc000688780) Stream removed, broadcasting: 1\nI0826 22:57:42.019334     559 log.go:172] (0xc000456000) Go away received\nI0826 22:57:42.019770     559 log.go:172] (0xc000456000) (0xc000688780) Stream removed, broadcasting: 1\nI0826 22:57:42.019792     559 log.go:172] (0xc000456000) (0xc000531540) Stream removed, broadcasting: 3\nI0826 22:57:42.019803     559 log.go:172] (0xc000456000) (0xc0005315e0) Stream removed, broadcasting: 5\n"
Aug 26 22:57:42.028: INFO: stdout: ""
Aug 26 22:57:42.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9434 execpodbtzbv -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 30462'
Aug 26 22:57:42.229: INFO: stderr: "I0826 22:57:42.157555     580 log.go:172] (0xc000101550) (0xc00099e000) Create stream\nI0826 22:57:42.157633     580 log.go:172] (0xc000101550) (0xc00099e000) Stream added, broadcasting: 1\nI0826 22:57:42.160389     580 log.go:172] (0xc000101550) Reply frame received for 1\nI0826 22:57:42.160410     580 log.go:172] (0xc000101550) (0xc00099e0a0) Create stream\nI0826 22:57:42.160418     580 log.go:172] (0xc000101550) (0xc00099e0a0) Stream added, broadcasting: 3\nI0826 22:57:42.161653     580 log.go:172] (0xc000101550) Reply frame received for 3\nI0826 22:57:42.161706     580 log.go:172] (0xc000101550) (0xc00064f9a0) Create stream\nI0826 22:57:42.161722     580 log.go:172] (0xc000101550) (0xc00064f9a0) Stream added, broadcasting: 5\nI0826 22:57:42.162733     580 log.go:172] (0xc000101550) Reply frame received for 5\nI0826 22:57:42.219682     580 log.go:172] (0xc000101550) Data frame received for 5\nI0826 22:57:42.219710     580 log.go:172] (0xc00064f9a0) (5) Data frame handling\nI0826 22:57:42.219724     580 log.go:172] (0xc00064f9a0) (5) Data frame sent\nI0826 22:57:42.219729     580 log.go:172] (0xc000101550) Data frame received for 5\nI0826 22:57:42.219736     580 log.go:172] (0xc00064f9a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.6 30462\nConnection to 172.18.0.6 30462 port [tcp/30462] succeeded!\nI0826 22:57:42.220138     580 log.go:172] (0xc000101550) Data frame received for 3\nI0826 22:57:42.220170     580 log.go:172] (0xc00099e0a0) (3) Data frame handling\nI0826 22:57:42.221703     580 log.go:172] (0xc000101550) Data frame received for 1\nI0826 22:57:42.221724     580 log.go:172] (0xc00099e000) (1) Data frame handling\nI0826 22:57:42.221736     580 log.go:172] (0xc00099e000) (1) Data frame sent\nI0826 22:57:42.221745     580 log.go:172] (0xc000101550) (0xc00099e000) Stream removed, broadcasting: 1\nI0826 22:57:42.221820     580 log.go:172] (0xc000101550) Go away received\nI0826 22:57:42.222049     580 log.go:172] (0xc000101550) (0xc00099e000) Stream removed, broadcasting: 1\nI0826 22:57:42.222067     580 log.go:172] (0xc000101550) (0xc00099e0a0) Stream removed, broadcasting: 3\nI0826 22:57:42.222074     580 log.go:172] (0xc000101550) (0xc00064f9a0) Stream removed, broadcasting: 5\n"
Aug 26 22:57:42.229: INFO: stdout: ""
Aug 26 22:57:42.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9434 execpodbtzbv -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.3 30462'
Aug 26 22:57:42.433: INFO: stderr: "I0826 22:57:42.351470     602 log.go:172] (0xc0000f91e0) (0xc0009f6000) Create stream\nI0826 22:57:42.351526     602 log.go:172] (0xc0000f91e0) (0xc0009f6000) Stream added, broadcasting: 1\nI0826 22:57:42.354702     602 log.go:172] (0xc0000f91e0) Reply frame received for 1\nI0826 22:57:42.354740     602 log.go:172] (0xc0000f91e0) (0xc000667b80) Create stream\nI0826 22:57:42.354755     602 log.go:172] (0xc0000f91e0) (0xc000667b80) Stream added, broadcasting: 3\nI0826 22:57:42.355800     602 log.go:172] (0xc0000f91e0) Reply frame received for 3\nI0826 22:57:42.355847     602 log.go:172] (0xc0000f91e0) (0xc0009f60a0) Create stream\nI0826 22:57:42.355860     602 log.go:172] (0xc0000f91e0) (0xc0009f60a0) Stream added, broadcasting: 5\nI0826 22:57:42.356950     602 log.go:172] (0xc0000f91e0) Reply frame received for 5\nI0826 22:57:42.419058     602 log.go:172] (0xc0000f91e0) Data frame received for 5\nI0826 22:57:42.419098     602 log.go:172] (0xc0009f60a0) (5) Data frame handling\nI0826 22:57:42.419113     602 log.go:172] (0xc0009f60a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.3 30462\nConnection to 172.18.0.3 30462 port [tcp/30462] succeeded!\nI0826 22:57:42.419162     602 log.go:172] (0xc0000f91e0) Data frame received for 3\nI0826 22:57:42.419190     602 log.go:172] (0xc000667b80) (3) Data frame handling\nI0826 22:57:42.419330     602 log.go:172] (0xc0000f91e0) Data frame received for 5\nI0826 22:57:42.419353     602 log.go:172] (0xc0009f60a0) (5) Data frame handling\nI0826 22:57:42.421164     602 log.go:172] (0xc0000f91e0) Data frame received for 1\nI0826 22:57:42.421187     602 log.go:172] (0xc0009f6000) (1) Data frame handling\nI0826 22:57:42.421199     602 log.go:172] (0xc0009f6000) (1) Data frame sent\nI0826 22:57:42.421227     602 log.go:172] (0xc0000f91e0) (0xc0009f6000) Stream removed, broadcasting: 1\nI0826 22:57:42.421249     602 log.go:172] (0xc0000f91e0) Go away received\nI0826 22:57:42.421648     602 log.go:172] (0xc0000f91e0) (0xc0009f6000) Stream removed, broadcasting: 1\nI0826 22:57:42.421669     602 log.go:172] (0xc0000f91e0) (0xc000667b80) Stream removed, broadcasting: 3\nI0826 22:57:42.421677     602 log.go:172] (0xc0000f91e0) (0xc0009f60a0) Stream removed, broadcasting: 5\n"
Aug 26 22:57:42.433: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:57:42.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9434" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:14.077 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":70,"skipped":993,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:57:42.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-89a39d00-1944-4639-a2a8-2df488d2c7ff
STEP: Creating a pod to test consume configMaps
Aug 26 22:57:42.566: INFO: Waiting up to 5m0s for pod "pod-configmaps-909b00c4-a519-4b0c-971e-96b2015fc040" in namespace "configmap-9701" to be "success or failure"
Aug 26 22:57:42.584: INFO: Pod "pod-configmaps-909b00c4-a519-4b0c-971e-96b2015fc040": Phase="Pending", Reason="", readiness=false. Elapsed: 17.04205ms
Aug 26 22:57:44.588: INFO: Pod "pod-configmaps-909b00c4-a519-4b0c-971e-96b2015fc040": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021358012s
Aug 26 22:57:46.596: INFO: Pod "pod-configmaps-909b00c4-a519-4b0c-971e-96b2015fc040": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029689345s
STEP: Saw pod success
Aug 26 22:57:46.596: INFO: Pod "pod-configmaps-909b00c4-a519-4b0c-971e-96b2015fc040" satisfied condition "success or failure"
Aug 26 22:57:46.599: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-909b00c4-a519-4b0c-971e-96b2015fc040 container configmap-volume-test: 
STEP: delete the pod
Aug 26 22:57:46.619: INFO: Waiting for pod pod-configmaps-909b00c4-a519-4b0c-971e-96b2015fc040 to disappear
Aug 26 22:57:46.630: INFO: Pod pod-configmaps-909b00c4-a519-4b0c-971e-96b2015fc040 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:57:46.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9701" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":996,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:57:46.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 22:57:47.701: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 22:57:49.745: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734079467, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734079467, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734079467, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734079467, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 22:57:52.793: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:57:52.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-885" for this suite.
STEP: Destroying namespace "webhook-885-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.405 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":72,"skipped":1006,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:57:53.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc1
STEP: create the rc2
STEP: set half of the pods created by rc simpletest-rc-to-be-deleted to also have rc simpletest-rc-to-stay as an owner
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0826 22:58:05.321875       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 26 22:58:05.321: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:58:05.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5300" for this suite.

• [SLOW TEST:12.311 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":73,"skipped":1062,"failed":0}
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:58:05.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-aa62fc71-31d2-406e-beb7-351243a5d43d
STEP: Creating a pod to test consume configMaps
Aug 26 22:58:06.556: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0d3a6013-009b-46a9-bc7c-049abacc535f" in namespace "projected-421" to be "success or failure"
Aug 26 22:58:06.926: INFO: Pod "pod-projected-configmaps-0d3a6013-009b-46a9-bc7c-049abacc535f": Phase="Pending", Reason="", readiness=false. Elapsed: 370.388658ms
Aug 26 22:58:08.930: INFO: Pod "pod-projected-configmaps-0d3a6013-009b-46a9-bc7c-049abacc535f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.374433826s
Aug 26 22:58:10.937: INFO: Pod "pod-projected-configmaps-0d3a6013-009b-46a9-bc7c-049abacc535f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.381619106s
STEP: Saw pod success
Aug 26 22:58:10.937: INFO: Pod "pod-projected-configmaps-0d3a6013-009b-46a9-bc7c-049abacc535f" satisfied condition "success or failure"
Aug 26 22:58:10.943: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-0d3a6013-009b-46a9-bc7c-049abacc535f container projected-configmap-volume-test: 
STEP: delete the pod
Aug 26 22:58:11.004: INFO: Waiting for pod pod-projected-configmaps-0d3a6013-009b-46a9-bc7c-049abacc535f to disappear
Aug 26 22:58:11.009: INFO: Pod pod-projected-configmaps-0d3a6013-009b-46a9-bc7c-049abacc535f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:58:11.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-421" for this suite.

• [SLOW TEST:5.669 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1063,"failed":0}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:58:11.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-9879
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-9879
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-9879
Aug 26 22:58:11.393: INFO: Found 0 stateful pods, waiting for 1
Aug 26 22:58:21.398: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Aug 26 22:58:21.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9879 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 22:58:21.690: INFO: stderr: "I0826 22:58:21.554915     624 log.go:172] (0xc000a92630) (0xc0009b0000) Create stream\nI0826 22:58:21.554997     624 log.go:172] (0xc000a92630) (0xc0009b0000) Stream added, broadcasting: 1\nI0826 22:58:21.560479     624 log.go:172] (0xc000a92630) Reply frame received for 1\nI0826 22:58:21.560526     624 log.go:172] (0xc000a92630) (0xc000962000) Create stream\nI0826 22:58:21.560541     624 log.go:172] (0xc000a92630) (0xc000962000) Stream added, broadcasting: 3\nI0826 22:58:21.561533     624 log.go:172] (0xc000a92630) Reply frame received for 3\nI0826 22:58:21.561567     624 log.go:172] (0xc000a92630) (0xc0009b00a0) Create stream\nI0826 22:58:21.561579     624 log.go:172] (0xc000a92630) (0xc0009b00a0) Stream added, broadcasting: 5\nI0826 22:58:21.562665     624 log.go:172] (0xc000a92630) Reply frame received for 5\nI0826 22:58:21.650812     624 log.go:172] (0xc000a92630) Data frame received for 5\nI0826 22:58:21.650843     624 log.go:172] (0xc0009b00a0) (5) Data frame handling\nI0826 22:58:21.650860     624 log.go:172] (0xc0009b00a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 22:58:21.675256     624 log.go:172] (0xc000a92630) Data frame received for 3\nI0826 22:58:21.675297     624 log.go:172] (0xc000962000) (3) Data frame handling\nI0826 22:58:21.675328     624 log.go:172] (0xc000962000) (3) Data frame sent\nI0826 22:58:21.675432     624 log.go:172] (0xc000a92630) Data frame received for 3\nI0826 22:58:21.675467     624 log.go:172] (0xc000962000) (3) Data frame handling\nI0826 22:58:21.675672     624 log.go:172] (0xc000a92630) Data frame received for 5\nI0826 22:58:21.675695     624 log.go:172] (0xc0009b00a0) (5) Data frame handling\nI0826 22:58:21.677433     624 log.go:172] (0xc000a92630) Data frame received for 1\nI0826 22:58:21.677463     624 log.go:172] (0xc0009b0000) (1) Data frame handling\nI0826 22:58:21.677480     624 log.go:172] (0xc0009b0000) (1) Data frame sent\nI0826 22:58:21.677498     624 log.go:172] (0xc000a92630) (0xc0009b0000) Stream removed, broadcasting: 1\nI0826 22:58:21.677592     624 log.go:172] (0xc000a92630) Go away received\nI0826 22:58:21.677921     624 log.go:172] (0xc000a92630) (0xc0009b0000) Stream removed, broadcasting: 1\nI0826 22:58:21.677949     624 log.go:172] (0xc000a92630) (0xc000962000) Stream removed, broadcasting: 3\nI0826 22:58:21.677969     624 log.go:172] (0xc000a92630) (0xc0009b00a0) Stream removed, broadcasting: 5\n"
Aug 26 22:58:21.690: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 22:58:21.690: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 26 22:58:21.693: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 26 22:58:31.706: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 22:58:31.706: INFO: Waiting for statefulset status.replicas to be updated to 0
Aug 26 22:58:31.820: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999748s
Aug 26 22:58:32.824: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.987567313s
Aug 26 22:58:33.829: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.983105233s
Aug 26 22:58:34.832: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.978375094s
Aug 26 22:58:35.837: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.975069715s
Aug 26 22:58:36.841: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.970221145s
Aug 26 22:58:37.845: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.965984492s
Aug 26 22:58:38.851: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.961801678s
Aug 26 22:58:39.855: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.95633816s
Aug 26 22:58:40.860: INFO: Verifying statefulset ss doesn't scale past 1 for another 951.85815ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-9879
Aug 26 22:58:41.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9879 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 22:58:42.087: INFO: stderr: "I0826 22:58:41.994708     645 log.go:172] (0xc000114580) (0xc0005926e0) Create stream\nI0826 22:58:41.994763     645 log.go:172] (0xc000114580) (0xc0005926e0) Stream added, broadcasting: 1\nI0826 22:58:41.997224     645 log.go:172] (0xc000114580) Reply frame received for 1\nI0826 22:58:41.997258     645 log.go:172] (0xc000114580) (0xc000906000) Create stream\nI0826 22:58:41.997268     645 log.go:172] (0xc000114580) (0xc000906000) Stream added, broadcasting: 3\nI0826 22:58:41.998282     645 log.go:172] (0xc000114580) Reply frame received for 3\nI0826 22:58:41.998333     645 log.go:172] (0xc000114580) (0xc0009a0000) Create stream\nI0826 22:58:41.998349     645 log.go:172] (0xc000114580) (0xc0009a0000) Stream added, broadcasting: 5\nI0826 22:58:41.999304     645 log.go:172] (0xc000114580) Reply frame received for 5\nI0826 22:58:42.076170     645 log.go:172] (0xc000114580) Data frame received for 3\nI0826 22:58:42.076231     645 log.go:172] (0xc000114580) Data frame received for 5\nI0826 22:58:42.076284     645 log.go:172] (0xc0009a0000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0826 22:58:42.076318     645 log.go:172] (0xc000906000) (3) Data frame handling\nI0826 22:58:42.076350     645 log.go:172] (0xc000906000) (3) Data frame sent\nI0826 22:58:42.076363     645 log.go:172] (0xc000114580) Data frame received for 3\nI0826 22:58:42.076373     645 log.go:172] (0xc000906000) (3) Data frame handling\nI0826 22:58:42.076397     645 log.go:172] (0xc0009a0000) (5) Data frame sent\nI0826 22:58:42.076407     645 log.go:172] (0xc000114580) Data frame received for 5\nI0826 22:58:42.076418     645 log.go:172] (0xc0009a0000) (5) Data frame handling\nI0826 22:58:42.077840     645 log.go:172] (0xc000114580) Data frame received for 1\nI0826 22:58:42.077865     645 log.go:172] (0xc0005926e0) (1) Data frame handling\nI0826 22:58:42.077884     645 log.go:172] (0xc0005926e0) (1) Data frame sent\nI0826 22:58:42.078051     645 log.go:172] (0xc000114580) (0xc0005926e0) Stream removed, broadcasting: 1\nI0826 22:58:42.078091     645 log.go:172] (0xc000114580) Go away received\nI0826 22:58:42.078474     645 log.go:172] (0xc000114580) (0xc0005926e0) Stream removed, broadcasting: 1\nI0826 22:58:42.078490     645 log.go:172] (0xc000114580) (0xc000906000) Stream removed, broadcasting: 3\nI0826 22:58:42.078499     645 log.go:172] (0xc000114580) (0xc0009a0000) Stream removed, broadcasting: 5\n"
Aug 26 22:58:42.087: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 22:58:42.087: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 26 22:58:42.090: INFO: Found 1 stateful pods, waiting for 3
Aug 26 22:58:52.094: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 22:58:52.095: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 22:58:52.095: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Aug 26 22:58:52.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9879 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 22:58:52.333: INFO: stderr: "I0826 22:58:52.238419     665 log.go:172] (0xc0000fce70) (0xc000637ae0) Create stream\nI0826 22:58:52.238472     665 log.go:172] (0xc0000fce70) (0xc000637ae0) Stream added, broadcasting: 1\nI0826 22:58:52.241300     665 log.go:172] (0xc0000fce70) Reply frame received for 1\nI0826 22:58:52.241341     665 log.go:172] (0xc0000fce70) (0xc000637cc0) Create stream\nI0826 22:58:52.241351     665 log.go:172] (0xc0000fce70) (0xc000637cc0) Stream added, broadcasting: 3\nI0826 22:58:52.242320     665 log.go:172] (0xc0000fce70) Reply frame received for 3\nI0826 22:58:52.242349     665 log.go:172] (0xc0000fce70) (0xc000ac2000) Create stream\nI0826 22:58:52.242359     665 log.go:172] (0xc0000fce70) (0xc000ac2000) Stream added, broadcasting: 5\nI0826 22:58:52.243329     665 log.go:172] (0xc0000fce70) Reply frame received for 5\nI0826 22:58:52.321444     665 log.go:172] (0xc0000fce70) Data frame received for 5\nI0826 22:58:52.321483     665 log.go:172] (0xc000ac2000) (5) Data frame handling\nI0826 22:58:52.321498     665 log.go:172] (0xc000ac2000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 22:58:52.321518     665 log.go:172] (0xc0000fce70) Data frame received for 5\nI0826 22:58:52.321543     665 log.go:172] (0xc000ac2000) (5) Data frame handling\nI0826 22:58:52.321579     665 log.go:172] (0xc0000fce70) Data frame received for 3\nI0826 22:58:52.321601     665 log.go:172] (0xc000637cc0) (3) Data frame handling\nI0826 22:58:52.321621     665 log.go:172] (0xc000637cc0) (3) Data frame sent\nI0826 22:58:52.321634     665 log.go:172] (0xc0000fce70) Data frame received for 3\nI0826 22:58:52.321645     665 log.go:172] (0xc000637cc0) (3) Data frame handling\nI0826 22:58:52.322944     665 log.go:172] (0xc0000fce70) Data frame received for 1\nI0826 22:58:52.323017     665 log.go:172] (0xc000637ae0) (1) Data frame handling\nI0826 22:58:52.323072     665 log.go:172] (0xc000637ae0) (1) Data frame sent\nI0826 22:58:52.323098     665 log.go:172] (0xc0000fce70) (0xc000637ae0) Stream removed, broadcasting: 1\nI0826 22:58:52.323118     665 log.go:172] (0xc0000fce70) Go away received\nI0826 22:58:52.323462     665 log.go:172] (0xc0000fce70) (0xc000637ae0) Stream removed, broadcasting: 1\nI0826 22:58:52.323475     665 log.go:172] (0xc0000fce70) (0xc000637cc0) Stream removed, broadcasting: 3\nI0826 22:58:52.323480     665 log.go:172] (0xc0000fce70) (0xc000ac2000) Stream removed, broadcasting: 5\n"
Aug 26 22:58:52.333: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 22:58:52.333: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 26 22:58:52.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9879 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 22:58:52.596: INFO: stderr: "I0826 22:58:52.461229     685 log.go:172] (0xc000228dc0) (0xc0009920a0) Create stream\nI0826 22:58:52.461285     685 log.go:172] (0xc000228dc0) (0xc0009920a0) Stream added, broadcasting: 1\nI0826 22:58:52.463669     685 log.go:172] (0xc000228dc0) Reply frame received for 1\nI0826 22:58:52.463712     685 log.go:172] (0xc000228dc0) (0xc0005f6780) Create stream\nI0826 22:58:52.463725     685 log.go:172] (0xc000228dc0) (0xc0005f6780) Stream added, broadcasting: 3\nI0826 22:58:52.464866     685 log.go:172] (0xc000228dc0) Reply frame received for 3\nI0826 22:58:52.464909     685 log.go:172] (0xc000228dc0) (0xc0006fdb80) Create stream\nI0826 22:58:52.464927     685 log.go:172] (0xc000228dc0) (0xc0006fdb80) Stream added, broadcasting: 5\nI0826 22:58:52.465903     685 log.go:172] (0xc000228dc0) Reply frame received for 5\nI0826 22:58:52.552558     685 log.go:172] (0xc000228dc0) Data frame received for 5\nI0826 22:58:52.552586     685 log.go:172] (0xc0006fdb80) (5) Data frame handling\nI0826 22:58:52.552598     685 log.go:172] (0xc0006fdb80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 22:58:52.585831     685 log.go:172] (0xc000228dc0) Data frame received for 3\nI0826 22:58:52.585872     685 log.go:172] (0xc0005f6780) (3) Data frame handling\nI0826 22:58:52.585904     685 log.go:172] (0xc0005f6780) (3) Data frame sent\nI0826 22:58:52.585961     685 log.go:172] (0xc000228dc0) Data frame received for 3\nI0826 22:58:52.585986     685 log.go:172] (0xc0005f6780) (3) Data frame handling\nI0826 22:58:52.586243     685 log.go:172] (0xc000228dc0) Data frame received for 5\nI0826 22:58:52.586274     685 log.go:172] (0xc0006fdb80) (5) Data frame handling\nI0826 22:58:52.588403     685 log.go:172] (0xc000228dc0) Data frame received for 1\nI0826 22:58:52.588416     685 log.go:172] (0xc0009920a0) (1) Data frame handling\nI0826 22:58:52.588422     685 log.go:172] (0xc0009920a0) (1) Data frame sent\nI0826 22:58:52.588431     685 log.go:172] (0xc000228dc0) (0xc0009920a0) Stream removed, broadcasting: 1\nI0826 22:58:52.588543     685 log.go:172] (0xc000228dc0) Go away received\nI0826 22:58:52.588695     685 log.go:172] (0xc000228dc0) (0xc0009920a0) Stream removed, broadcasting: 1\nI0826 22:58:52.588707     685 log.go:172] (0xc000228dc0) (0xc0005f6780) Stream removed, broadcasting: 3\nI0826 22:58:52.588713     685 log.go:172] (0xc000228dc0) (0xc0006fdb80) Stream removed, broadcasting: 5\n"
Aug 26 22:58:52.596: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 22:58:52.596: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 26 22:58:52.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9879 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 22:58:52.883: INFO: stderr: "I0826 22:58:52.761110     707 log.go:172] (0xc000118b00) (0xc000423540) Create stream\nI0826 22:58:52.761180     707 log.go:172] (0xc000118b00) (0xc000423540) Stream added, broadcasting: 1\nI0826 22:58:52.763943     707 log.go:172] (0xc000118b00) Reply frame received for 1\nI0826 22:58:52.763985     707 log.go:172] (0xc000118b00) (0xc000936000) Create stream\nI0826 22:58:52.764002     707 log.go:172] (0xc000118b00) (0xc000936000) Stream added, broadcasting: 3\nI0826 22:58:52.765083     707 log.go:172] (0xc000118b00) Reply frame received for 3\nI0826 22:58:52.765132     707 log.go:172] (0xc000118b00) (0xc0009360a0) Create stream\nI0826 22:58:52.765147     707 log.go:172] (0xc000118b00) (0xc0009360a0) Stream added, broadcasting: 5\nI0826 22:58:52.766147     707 log.go:172] (0xc000118b00) Reply frame received for 5\nI0826 22:58:52.839907     707 log.go:172] (0xc000118b00) Data frame received for 5\nI0826 22:58:52.839953     707 log.go:172] (0xc0009360a0) (5) Data frame handling\nI0826 22:58:52.839988     707 log.go:172] (0xc0009360a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 22:58:52.871968     707 log.go:172] (0xc000118b00) Data frame received for 3\nI0826 22:58:52.872198     707 log.go:172] (0xc000936000) (3) Data frame handling\nI0826 22:58:52.872311     707 log.go:172] (0xc000936000) (3) Data frame sent\nI0826 22:58:52.874147     707 log.go:172] (0xc000118b00) Data frame received for 5\nI0826 22:58:52.874176     707 log.go:172] (0xc0009360a0) (5) Data frame handling\nI0826 22:58:52.874425     707 log.go:172] (0xc000118b00) Data frame received for 3\nI0826 22:58:52.874461     707 log.go:172] (0xc000936000) (3) Data frame handling\nI0826 22:58:52.876630     707 log.go:172] (0xc000118b00) Data frame received for 1\nI0826 22:58:52.876651     707 log.go:172] (0xc000423540) (1) Data frame handling\nI0826 22:58:52.876662     707 log.go:172] (0xc000423540) (1) Data frame sent\nI0826 22:58:52.876672     707 log.go:172] (0xc000118b00) (0xc000423540) Stream removed, broadcasting: 1\nI0826 22:58:52.876868     707 log.go:172] (0xc000118b00) Go away received\nI0826 22:58:52.877259     707 log.go:172] (0xc000118b00) (0xc000423540) Stream removed, broadcasting: 1\nI0826 22:58:52.877274     707 log.go:172] (0xc000118b00) (0xc000936000) Stream removed, broadcasting: 3\nI0826 22:58:52.877284     707 log.go:172] (0xc000118b00) (0xc0009360a0) Stream removed, broadcasting: 5\n"
Aug 26 22:58:52.884: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 22:58:52.884: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 26 22:58:52.884: INFO: Waiting for statefulset status.replicas to be updated to 0
Aug 26 22:58:52.886: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Aug 26 22:59:02.912: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 22:59:02.912: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 22:59:02.912: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 22:59:02.928: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999589s
Aug 26 22:59:03.932: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990745298s
Aug 26 22:59:04.937: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986083973s
Aug 26 22:59:05.942: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.981631191s
Aug 26 22:59:06.946: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.97662035s
Aug 26 22:59:07.952: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.971873449s
Aug 26 22:59:08.956: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.966695458s
Aug 26 22:59:09.961: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.962085189s
Aug 26 22:59:11.065: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.957675422s
Aug 26 22:59:12.070: INFO: Verifying statefulset ss doesn't scale past 3 for another 853.213143ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-9879
Aug 26 22:59:13.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9879 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 22:59:13.336: INFO: stderr: "I0826 22:59:13.237574     728 log.go:172] (0xc000a4c630) (0xc000ac0640) Create stream\nI0826 22:59:13.237623     728 log.go:172] (0xc000a4c630) (0xc000ac0640) Stream added, broadcasting: 1\nI0826 22:59:13.242906     728 log.go:172] (0xc000a4c630) Reply frame received for 1\nI0826 22:59:13.242974     728 log.go:172] (0xc000a4c630) (0xc00065c6e0) Create stream\nI0826 22:59:13.243000     728 log.go:172] (0xc000a4c630) (0xc00065c6e0) Stream added, broadcasting: 3\nI0826 22:59:13.244186     728 log.go:172] (0xc000a4c630) Reply frame received for 3\nI0826 22:59:13.244219     728 log.go:172] (0xc000a4c630) (0xc00052f4a0) Create stream\nI0826 22:59:13.244230     728 log.go:172] (0xc000a4c630) (0xc00052f4a0) Stream added, broadcasting: 5\nI0826 22:59:13.245396     728 log.go:172] (0xc000a4c630) Reply frame received for 5\nI0826 22:59:13.326484     728 log.go:172] (0xc000a4c630) Data frame received for 5\nI0826 22:59:13.326521     728 log.go:172] (0xc00052f4a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0826 22:59:13.326549     728 log.go:172] (0xc000a4c630) Data frame received for 3\nI0826 22:59:13.326582     728 log.go:172] (0xc00065c6e0) (3) Data frame handling\nI0826 22:59:13.326597     728 log.go:172] (0xc00065c6e0) (3) Data frame sent\nI0826 22:59:13.326611     728 log.go:172] (0xc000a4c630) Data frame received for 3\nI0826 22:59:13.326622     728 log.go:172] (0xc00065c6e0) (3) Data frame handling\nI0826 22:59:13.326644     728 log.go:172] (0xc00052f4a0) (5) Data frame sent\nI0826 22:59:13.326663     728 log.go:172] (0xc000a4c630) Data frame received for 5\nI0826 22:59:13.326674     728 log.go:172] (0xc00052f4a0) (5) Data frame handling\nI0826 22:59:13.328143     728 log.go:172] (0xc000a4c630) Data frame received for 1\nI0826 22:59:13.328161     728 log.go:172] (0xc000ac0640) (1) Data frame handling\nI0826 22:59:13.328169     728 log.go:172] (0xc000ac0640) (1) Data frame sent\nI0826 22:59:13.328179     728 log.go:172] (0xc000a4c630) (0xc000ac0640) Stream removed, broadcasting: 1\nI0826 22:59:13.328192     728 log.go:172] (0xc000a4c630) Go away received\nI0826 22:59:13.328653     728 log.go:172] (0xc000a4c630) (0xc000ac0640) Stream removed, broadcasting: 1\nI0826 22:59:13.328677     728 log.go:172] (0xc000a4c630) (0xc00065c6e0) Stream removed, broadcasting: 3\nI0826 22:59:13.328690     728 log.go:172] (0xc000a4c630) (0xc00052f4a0) Stream removed, broadcasting: 5\n"
Aug 26 22:59:13.336: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 22:59:13.336: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 26 22:59:13.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9879 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 22:59:13.566: INFO: stderr: "I0826 22:59:13.494229     748 log.go:172] (0xc000ac8000) (0xc0002e3400) Create stream\nI0826 22:59:13.494301     748 log.go:172] (0xc000ac8000) (0xc0002e3400) Stream added, broadcasting: 1\nI0826 22:59:13.497179     748 log.go:172] (0xc000ac8000) Reply frame received for 1\nI0826 22:59:13.497220     748 log.go:172] (0xc000ac8000) (0xc0008f40a0) Create stream\nI0826 22:59:13.497232     748 log.go:172] (0xc000ac8000) (0xc0008f40a0) Stream added, broadcasting: 3\nI0826 22:59:13.498429     748 log.go:172] (0xc000ac8000) Reply frame received for 3\nI0826 22:59:13.498466     748 log.go:172] (0xc000ac8000) (0xc0006a1a40) Create stream\nI0826 22:59:13.498476     748 log.go:172] (0xc000ac8000) (0xc0006a1a40) Stream added, broadcasting: 5\nI0826 22:59:13.499411     748 log.go:172] (0xc000ac8000) Reply frame received for 5\nI0826 22:59:13.552901     748 log.go:172] (0xc000ac8000) Data frame received for 3\nI0826 22:59:13.552928     748 log.go:172] (0xc0008f40a0) (3) Data frame handling\nI0826 22:59:13.552949     748 log.go:172] (0xc0008f40a0) (3) Data frame sent\nI0826 22:59:13.553107     748 log.go:172] (0xc000ac8000) Data frame received for 5\nI0826 22:59:13.553148     748 log.go:172] (0xc0006a1a40) (5) Data frame handling\nI0826 22:59:13.553172     748 log.go:172] (0xc0006a1a40) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0826 22:59:13.553197     748 log.go:172] (0xc000ac8000) Data frame received for 3\nI0826 22:59:13.553220     748 log.go:172] (0xc0008f40a0) (3) Data frame handling\nI0826 22:59:13.553306     748 log.go:172] (0xc000ac8000) Data frame received for 5\nI0826 22:59:13.553324     748 log.go:172] (0xc0006a1a40) (5) Data frame handling\nI0826 22:59:13.554667     748 log.go:172] (0xc000ac8000) Data frame received for 1\nI0826 22:59:13.554692     748 log.go:172] (0xc0002e3400) (1) Data frame handling\nI0826 22:59:13.554722     748 log.go:172] (0xc0002e3400) (1) Data frame sent\nI0826 22:59:13.554745     748 log.go:172] (0xc000ac8000) (0xc0002e3400) Stream removed, broadcasting: 1\nI0826 22:59:13.554770     748 log.go:172] (0xc000ac8000) Go away received\nI0826 22:59:13.555029     748 log.go:172] (0xc000ac8000) (0xc0002e3400) Stream removed, broadcasting: 1\nI0826 22:59:13.555043     748 log.go:172] (0xc000ac8000) (0xc0008f40a0) Stream removed, broadcasting: 3\nI0826 22:59:13.555050     748 log.go:172] (0xc000ac8000) (0xc0006a1a40) Stream removed, broadcasting: 5\n"
Aug 26 22:59:13.566: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 22:59:13.566: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 26 22:59:13.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9879 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 22:59:13.771: INFO: stderr: "I0826 22:59:13.685148     768 log.go:172] (0xc0009ad1e0) (0xc000974500) Create stream\nI0826 22:59:13.685213     768 log.go:172] (0xc0009ad1e0) (0xc000974500) Stream added, broadcasting: 1\nI0826 22:59:13.687990     768 log.go:172] (0xc0009ad1e0) Reply frame received for 1\nI0826 22:59:13.688034     768 log.go:172] (0xc0009ad1e0) (0xc000bb80a0) Create stream\nI0826 22:59:13.688047     768 log.go:172] (0xc0009ad1e0) (0xc000bb80a0) Stream added, broadcasting: 3\nI0826 22:59:13.689286     768 log.go:172] (0xc0009ad1e0) Reply frame received for 3\nI0826 22:59:13.689344     768 log.go:172] (0xc0009ad1e0) (0xc000a461e0) Create stream\nI0826 22:59:13.689370     768 log.go:172] (0xc0009ad1e0) (0xc000a461e0) Stream added, broadcasting: 5\nI0826 22:59:13.690491     768 log.go:172] (0xc0009ad1e0) Reply frame received for 5\nI0826 22:59:13.759128     768 log.go:172] (0xc0009ad1e0) Data frame received for 3\nI0826 22:59:13.759170     768 log.go:172] (0xc000bb80a0) (3) Data frame handling\nI0826 22:59:13.759184     768 log.go:172] (0xc000bb80a0) (3) Data frame sent\nI0826 22:59:13.759191     768 log.go:172] (0xc0009ad1e0) Data frame received for 3\nI0826 22:59:13.759198     768 log.go:172] (0xc000bb80a0) (3) Data frame handling\nI0826 22:59:13.759222     768 log.go:172] (0xc0009ad1e0) Data frame received for 5\nI0826 22:59:13.759230     768 log.go:172] (0xc000a461e0) (5) Data frame handling\nI0826 22:59:13.759245     768 log.go:172] (0xc000a461e0) (5) Data frame sent\nI0826 22:59:13.759256     768 log.go:172] (0xc0009ad1e0) Data frame received for 5\nI0826 22:59:13.759262     768 log.go:172] (0xc000a461e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0826 22:59:13.760698     768 log.go:172] (0xc0009ad1e0) Data frame received for 1\nI0826 22:59:13.760863     768 log.go:172] (0xc000974500) (1) Data frame handling\nI0826 22:59:13.760901     768 log.go:172] (0xc000974500) (1) Data frame sent\nI0826 22:59:13.760920     768 log.go:172] (0xc0009ad1e0) (0xc000974500) Stream removed, broadcasting: 1\nI0826 22:59:13.760936     768 log.go:172] (0xc0009ad1e0) Go away received\nI0826 22:59:13.761345     768 log.go:172] (0xc0009ad1e0) (0xc000974500) Stream removed, broadcasting: 1\nI0826 22:59:13.761362     768 log.go:172] (0xc0009ad1e0) (0xc000bb80a0) Stream removed, broadcasting: 3\nI0826 22:59:13.761369     768 log.go:172] (0xc0009ad1e0) (0xc000a461e0) Stream removed, broadcasting: 5\n"
Aug 26 22:59:13.771: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 22:59:13.771: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 26 22:59:13.771: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 26 22:59:33.788: INFO: Deleting all statefulset in ns statefulset-9879
Aug 26 22:59:33.791: INFO: Scaling statefulset ss to 0
Aug 26 22:59:33.799: INFO: Waiting for statefulset status.replicas to be updated to 0
Aug 26 22:59:33.801: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:59:33.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9879" for this suite.

• [SLOW TEST:82.808 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":75,"skipped":1068,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:59:33.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-4985/configmap-test-b5a87da5-ac33-49d3-ad6d-6cde4fba2377
STEP: Creating a pod to test consume configMaps
Aug 26 22:59:33.924: INFO: Waiting up to 5m0s for pod "pod-configmaps-a365ecb9-6dc4-4152-9e38-bba0c735afe1" in namespace "configmap-4985" to be "success or failure"
Aug 26 22:59:33.957: INFO: Pod "pod-configmaps-a365ecb9-6dc4-4152-9e38-bba0c735afe1": Phase="Pending", Reason="", readiness=false. Elapsed: 32.56881ms
Aug 26 22:59:36.077: INFO: Pod "pod-configmaps-a365ecb9-6dc4-4152-9e38-bba0c735afe1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152700727s
Aug 26 22:59:38.080: INFO: Pod "pod-configmaps-a365ecb9-6dc4-4152-9e38-bba0c735afe1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.155950273s
STEP: Saw pod success
Aug 26 22:59:38.080: INFO: Pod "pod-configmaps-a365ecb9-6dc4-4152-9e38-bba0c735afe1" satisfied condition "success or failure"
Aug 26 22:59:38.083: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-a365ecb9-6dc4-4152-9e38-bba0c735afe1 container env-test: 
STEP: delete the pod
Aug 26 22:59:38.103: INFO: Waiting for pod pod-configmaps-a365ecb9-6dc4-4152-9e38-bba0c735afe1 to disappear
Aug 26 22:59:38.108: INFO: Pod pod-configmaps-a365ecb9-6dc4-4152-9e38-bba0c735afe1 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:59:38.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4985" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1106,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:59:38.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 26 22:59:42.361: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 22:59:42.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6350" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1139,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 22:59:42.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-42690805-e884-4e18-825e-9dfd769d15f3 in namespace container-probe-9160
Aug 26 22:59:46.783: INFO: Started pod liveness-42690805-e884-4e18-825e-9dfd769d15f3 in namespace container-probe-9160
STEP: checking the pod's current state and verifying that restartCount is present
Aug 26 22:59:46.786: INFO: Initial restart count of pod liveness-42690805-e884-4e18-825e-9dfd769d15f3 is 0
Aug 26 23:00:00.818: INFO: Restart count of pod container-probe-9160/liveness-42690805-e884-4e18-825e-9dfd769d15f3 is now 1 (14.03237267s elapsed)
Aug 26 23:00:21.074: INFO: Restart count of pod container-probe-9160/liveness-42690805-e884-4e18-825e-9dfd769d15f3 is now 2 (34.287612626s elapsed)
Aug 26 23:00:41.127: INFO: Restart count of pod container-probe-9160/liveness-42690805-e884-4e18-825e-9dfd769d15f3 is now 3 (54.340813376s elapsed)
Aug 26 23:01:01.170: INFO: Restart count of pod container-probe-9160/liveness-42690805-e884-4e18-825e-9dfd769d15f3 is now 4 (1m14.384338438s elapsed)
Aug 26 23:02:05.382: INFO: Restart count of pod container-probe-9160/liveness-42690805-e884-4e18-825e-9dfd769d15f3 is now 5 (2m18.596006005s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:02:05.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9160" for this suite.

• [SLOW TEST:142.756 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1162,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:02:05.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:02:06.153: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"3cdd4219-d81d-489b-a808-05dca354f5a0", Controller:(*bool)(0xc005405d1a), BlockOwnerDeletion:(*bool)(0xc005405d1b)}}
Aug 26 23:02:06.385: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"4e1652fe-7eb0-4e43-9996-4a493ecd6660", Controller:(*bool)(0xc00550ae22), BlockOwnerDeletion:(*bool)(0xc00550ae23)}}
Aug 26 23:02:06.482: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"59a63966-7f6d-4043-b939-d34c3c3539ee", Controller:(*bool)(0xc005405ed2), BlockOwnerDeletion:(*bool)(0xc005405ed3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:02:11.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1981" for this suite.

• [SLOW TEST:6.174 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":79,"skipped":1198,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:02:11.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-f098a299-c84b-4423-8611-a6dd515909f4
STEP: Creating secret with name s-test-opt-upd-c311c8d9-227b-4966-bac2-ffea75cf5c27
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f098a299-c84b-4423-8611-a6dd515909f4
STEP: Updating secret s-test-opt-upd-c311c8d9-227b-4966-bac2-ffea75cf5c27
STEP: Creating secret with name s-test-opt-create-25985c32-9e8b-455d-9cf0-41074aff29bf
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:03:20.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9613" for this suite.

• [SLOW TEST:68.500 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1270,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:03:20.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:03:20.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Aug 26 23:03:20.806: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-26T23:03:20Z generation:1 name:name1 resourceVersion:4032298 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:b6502874-6866-41af-8b1e-fad98c87ee34] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Aug 26 23:03:30.827: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-26T23:03:30Z generation:1 name:name2 resourceVersion:4032346 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:493ff6b6-22b8-4f91-aedf-9bc18a6321ce] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Aug 26 23:03:40.834: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-26T23:03:20Z generation:2 name:name1 resourceVersion:4032379 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:b6502874-6866-41af-8b1e-fad98c87ee34] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Aug 26 23:03:50.843: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-26T23:03:30Z generation:2 name:name2 resourceVersion:4032409 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:493ff6b6-22b8-4f91-aedf-9bc18a6321ce] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Aug 26 23:04:00.851: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-26T23:03:20Z generation:2 name:name1 resourceVersion:4032439 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:b6502874-6866-41af-8b1e-fad98c87ee34] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Aug 26 23:04:10.858: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-26T23:03:30Z generation:2 name:name2 resourceVersion:4032469 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:493ff6b6-22b8-4f91-aedf-9bc18a6321ce] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:04:21.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-8590" for this suite.

• [SLOW TEST:61.292 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":81,"skipped":1294,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:04:21.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Aug 26 23:04:25.489: INFO: &Pod{ObjectMeta:{send-events-9ec25452-b56b-451f-9fb2-2d4d79531e29  events-6474 /api/v1/namespaces/events-6474/pods/send-events-9ec25452-b56b-451f-9fb2-2d4d79531e29 1ddaa458-d93c-4612-9200-c75c12aa842e 4032529 0 2020-08-26 23:04:21 +0000 UTC   map[name:foo time:460043172] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8sjj8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8sjj8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8sjj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:04:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:04:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:04:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:04:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.198,StartTime:2020-08-26 23:04:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 23:04:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://d698784ede15898a976de79e1a5090f0dba8b398db21ae0ddadc87ce15b31f1c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.198,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Aug 26 23:04:27.516: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Aug 26 23:04:29.520: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:04:29.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6474" for this suite.

• [SLOW TEST:8.167 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":82,"skipped":1345,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:04:29.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-31b448ca-1cbe-4f20-87c7-4c80aa1bced1
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-31b448ca-1cbe-4f20-87c7-4c80aa1bced1
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:04:35.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3716" for this suite.

• [SLOW TEST:6.245 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1367,"failed":0}
S
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:04:35.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:04:36.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-8384" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":84,"skipped":1368,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:04:36.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should scale a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Aug 26 23:04:36.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5188'
Aug 26 23:04:40.484: INFO: stderr: ""
Aug 26 23:04:40.484: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 26 23:04:40.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5188'
Aug 26 23:04:40.639: INFO: stderr: ""
Aug 26 23:04:40.639: INFO: stdout: "update-demo-nautilus-2kg56 update-demo-nautilus-f7f9w "
Aug 26 23:04:40.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2kg56 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5188'
Aug 26 23:04:40.765: INFO: stderr: ""
Aug 26 23:04:40.765: INFO: stdout: ""
Aug 26 23:04:40.765: INFO: update-demo-nautilus-2kg56 is created but not running
Aug 26 23:04:45.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5188'
Aug 26 23:04:45.919: INFO: stderr: ""
Aug 26 23:04:45.919: INFO: stdout: "update-demo-nautilus-2kg56 update-demo-nautilus-f7f9w "
Aug 26 23:04:45.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2kg56 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5188'
Aug 26 23:04:46.010: INFO: stderr: ""
Aug 26 23:04:46.010: INFO: stdout: "true"
Aug 26 23:04:46.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2kg56 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5188'
Aug 26 23:04:46.108: INFO: stderr: ""
Aug 26 23:04:46.108: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 26 23:04:46.108: INFO: validating pod update-demo-nautilus-2kg56
Aug 26 23:04:46.111: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 26 23:04:46.111: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 26 23:04:46.111: INFO: update-demo-nautilus-2kg56 is verified up and running
Aug 26 23:04:46.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f7f9w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5188'
Aug 26 23:04:46.224: INFO: stderr: ""
Aug 26 23:04:46.224: INFO: stdout: "true"
Aug 26 23:04:46.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f7f9w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5188'
Aug 26 23:04:46.339: INFO: stderr: ""
Aug 26 23:04:46.339: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 26 23:04:46.339: INFO: validating pod update-demo-nautilus-f7f9w
Aug 26 23:04:46.342: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 26 23:04:46.342: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 26 23:04:46.342: INFO: update-demo-nautilus-f7f9w is verified up and running
STEP: scaling down the replication controller
Aug 26 23:04:46.344: INFO: scanned /root for discovery docs: 
Aug 26 23:04:46.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5188'
Aug 26 23:04:47.623: INFO: stderr: ""
Aug 26 23:04:47.623: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 26 23:04:47.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5188'
Aug 26 23:04:47.721: INFO: stderr: ""
Aug 26 23:04:47.721: INFO: stdout: "update-demo-nautilus-2kg56 update-demo-nautilus-f7f9w "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 26 23:04:52.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5188'
Aug 26 23:04:52.828: INFO: stderr: ""
Aug 26 23:04:52.828: INFO: stdout: "update-demo-nautilus-2kg56 update-demo-nautilus-f7f9w "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 26 23:04:57.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5188'
Aug 26 23:04:57.938: INFO: stderr: ""
Aug 26 23:04:57.938: INFO: stdout: "update-demo-nautilus-2kg56 update-demo-nautilus-f7f9w "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 26 23:05:02.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5188'
Aug 26 23:05:03.035: INFO: stderr: ""
Aug 26 23:05:03.035: INFO: stdout: "update-demo-nautilus-2kg56 "
Aug 26 23:05:03.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2kg56 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5188'
Aug 26 23:05:03.128: INFO: stderr: ""
Aug 26 23:05:03.128: INFO: stdout: "true"
Aug 26 23:05:03.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2kg56 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5188'
Aug 26 23:05:03.221: INFO: stderr: ""
Aug 26 23:05:03.221: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 26 23:05:03.221: INFO: validating pod update-demo-nautilus-2kg56
Aug 26 23:05:03.224: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 26 23:05:03.224: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 26 23:05:03.224: INFO: update-demo-nautilus-2kg56 is verified up and running
STEP: scaling up the replication controller
Aug 26 23:05:03.225: INFO: scanned /root for discovery docs: 
Aug 26 23:05:03.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5188'
Aug 26 23:05:04.371: INFO: stderr: ""
Aug 26 23:05:04.371: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 26 23:05:04.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5188'
Aug 26 23:05:04.464: INFO: stderr: ""
Aug 26 23:05:04.464: INFO: stdout: "update-demo-nautilus-2kg56 update-demo-nautilus-kwtn5 "
Aug 26 23:05:04.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2kg56 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5188'
Aug 26 23:05:04.550: INFO: stderr: ""
Aug 26 23:05:04.550: INFO: stdout: "true"
Aug 26 23:05:04.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2kg56 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5188'
Aug 26 23:05:04.639: INFO: stderr: ""
Aug 26 23:05:04.640: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 26 23:05:04.640: INFO: validating pod update-demo-nautilus-2kg56
Aug 26 23:05:04.643: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 26 23:05:04.643: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 26 23:05:04.643: INFO: update-demo-nautilus-2kg56 is verified up and running
Aug 26 23:05:04.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kwtn5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5188'
Aug 26 23:05:04.724: INFO: stderr: ""
Aug 26 23:05:04.724: INFO: stdout: ""
Aug 26 23:05:04.724: INFO: update-demo-nautilus-kwtn5 is created but not running
Aug 26 23:05:09.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5188'
Aug 26 23:05:09.822: INFO: stderr: ""
Aug 26 23:05:09.822: INFO: stdout: "update-demo-nautilus-2kg56 update-demo-nautilus-kwtn5 "
Aug 26 23:05:09.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2kg56 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5188'
Aug 26 23:05:09.914: INFO: stderr: ""
Aug 26 23:05:09.914: INFO: stdout: "true"
Aug 26 23:05:09.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2kg56 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5188'
Aug 26 23:05:10.006: INFO: stderr: ""
Aug 26 23:05:10.006: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 26 23:05:10.006: INFO: validating pod update-demo-nautilus-2kg56
Aug 26 23:05:10.009: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 26 23:05:10.009: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 26 23:05:10.009: INFO: update-demo-nautilus-2kg56 is verified up and running
Aug 26 23:05:10.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kwtn5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5188'
Aug 26 23:05:10.142: INFO: stderr: ""
Aug 26 23:05:10.142: INFO: stdout: "true"
Aug 26 23:05:10.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kwtn5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5188'
Aug 26 23:05:10.247: INFO: stderr: ""
Aug 26 23:05:10.247: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 26 23:05:10.247: INFO: validating pod update-demo-nautilus-kwtn5
Aug 26 23:05:10.250: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 26 23:05:10.250: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 26 23:05:10.250: INFO: update-demo-nautilus-kwtn5 is verified up and running
STEP: using delete to clean up resources
Aug 26 23:05:10.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5188'
Aug 26 23:05:10.367: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 23:05:10.367: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 26 23:05:10.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5188'
Aug 26 23:05:10.478: INFO: stderr: "No resources found in kubectl-5188 namespace.\n"
Aug 26 23:05:10.478: INFO: stdout: ""
Aug 26 23:05:10.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5188 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 26 23:05:10.575: INFO: stderr: ""
Aug 26 23:05:10.575: INFO: stdout: "update-demo-nautilus-2kg56\nupdate-demo-nautilus-kwtn5\n"
Aug 26 23:05:11.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5188'
Aug 26 23:05:11.182: INFO: stderr: "No resources found in kubectl-5188 namespace.\n"
Aug 26 23:05:11.182: INFO: stdout: ""
Aug 26 23:05:11.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5188 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 26 23:05:11.284: INFO: stderr: ""
Aug 26 23:05:11.284: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:05:11.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5188" for this suite.

• [SLOW TEST:35.076 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should scale a replication controller  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":85,"skipped":1372,"failed":0}
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:05:11.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 26 23:05:16.033: INFO: Successfully updated pod "annotationupdate483f8599-7141-425c-9af7-b7ac98492211"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:05:20.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4930" for this suite.

• [SLOW TEST:8.861 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1372,"failed":0}
SSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:05:20.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:05:38.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6763" for this suite.

• [SLOW TEST:18.199 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":87,"skipped":1377,"failed":0}
SS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:05:38.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-6107
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-6107
I0826 23:05:38.577817       6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-6107, replica count: 2
I0826 23:05:41.628292       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 23:05:44.628526       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 26 23:05:44.628: INFO: Creating new exec pod
Aug 26 23:05:49.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6107 execpod8ss2l -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 26 23:05:50.034: INFO: stderr: "I0826 23:05:49.945809    1413 log.go:172] (0xc0005226e0) (0xc000488000) Create stream\nI0826 23:05:49.945876    1413 log.go:172] (0xc0005226e0) (0xc000488000) Stream added, broadcasting: 1\nI0826 23:05:49.948277    1413 log.go:172] (0xc0005226e0) Reply frame received for 1\nI0826 23:05:49.948326    1413 log.go:172] (0xc0005226e0) (0xc00067dae0) Create stream\nI0826 23:05:49.948341    1413 log.go:172] (0xc0005226e0) (0xc00067dae0) Stream added, broadcasting: 3\nI0826 23:05:49.949498    1413 log.go:172] (0xc0005226e0) Reply frame received for 3\nI0826 23:05:49.949516    1413 log.go:172] (0xc0005226e0) (0xc000488140) Create stream\nI0826 23:05:49.949522    1413 log.go:172] (0xc0005226e0) (0xc000488140) Stream added, broadcasting: 5\nI0826 23:05:49.950552    1413 log.go:172] (0xc0005226e0) Reply frame received for 5\nI0826 23:05:50.023262    1413 log.go:172] (0xc0005226e0) Data frame received for 5\nI0826 23:05:50.023297    1413 log.go:172] (0xc000488140) (5) Data frame handling\nI0826 23:05:50.023320    1413 log.go:172] (0xc000488140) (5) Data frame sent\nI0826 23:05:50.023334    1413 log.go:172] (0xc0005226e0) Data frame received for 5\nI0826 23:05:50.023345    1413 log.go:172] (0xc000488140) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0826 23:05:50.023368    1413 log.go:172] (0xc000488140) (5) Data frame sent\nI0826 23:05:50.023548    1413 log.go:172] (0xc0005226e0) Data frame received for 5\nI0826 23:05:50.023570    1413 log.go:172] (0xc000488140) (5) Data frame handling\nI0826 23:05:50.023847    1413 log.go:172] (0xc0005226e0) Data frame received for 3\nI0826 23:05:50.023870    1413 log.go:172] (0xc00067dae0) (3) Data frame handling\nI0826 23:05:50.025593    1413 log.go:172] (0xc0005226e0) Data frame received for 1\nI0826 23:05:50.025611    1413 log.go:172] (0xc000488000) (1) Data frame handling\nI0826 23:05:50.025629    1413 log.go:172] (0xc000488000) (1) Data frame sent\nI0826 23:05:50.025646    1413 log.go:172] (0xc0005226e0) (0xc000488000) Stream removed, broadcasting: 1\nI0826 23:05:50.025699    1413 log.go:172] (0xc0005226e0) Go away received\nI0826 23:05:50.025964    1413 log.go:172] (0xc0005226e0) (0xc000488000) Stream removed, broadcasting: 1\nI0826 23:05:50.025979    1413 log.go:172] (0xc0005226e0) (0xc00067dae0) Stream removed, broadcasting: 3\nI0826 23:05:50.025986    1413 log.go:172] (0xc0005226e0) (0xc000488140) Stream removed, broadcasting: 5\n"
Aug 26 23:05:50.034: INFO: stdout: ""
Aug 26 23:05:50.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6107 execpod8ss2l -- /bin/sh -x -c nc -zv -t -w 2 10.102.23.110 80'
Aug 26 23:05:50.239: INFO: stderr: "I0826 23:05:50.153638    1433 log.go:172] (0xc0001051e0) (0xc0006d1d60) Create stream\nI0826 23:05:50.153698    1433 log.go:172] (0xc0001051e0) (0xc0006d1d60) Stream added, broadcasting: 1\nI0826 23:05:50.157784    1433 log.go:172] (0xc0001051e0) Reply frame received for 1\nI0826 23:05:50.157986    1433 log.go:172] (0xc0001051e0) (0xc0006d1e00) Create stream\nI0826 23:05:50.158027    1433 log.go:172] (0xc0001051e0) (0xc0006d1e00) Stream added, broadcasting: 3\nI0826 23:05:50.162314    1433 log.go:172] (0xc0001051e0) Reply frame received for 3\nI0826 23:05:50.162528    1433 log.go:172] (0xc0001051e0) (0xc0006d1ea0) Create stream\nI0826 23:05:50.162605    1433 log.go:172] (0xc0001051e0) (0xc0006d1ea0) Stream added, broadcasting: 5\nI0826 23:05:50.166409    1433 log.go:172] (0xc0001051e0) Reply frame received for 5\nI0826 23:05:50.227726    1433 log.go:172] (0xc0001051e0) Data frame received for 3\nI0826 23:05:50.227755    1433 log.go:172] (0xc0006d1e00) (3) Data frame handling\nI0826 23:05:50.227777    1433 log.go:172] (0xc0001051e0) Data frame received for 5\nI0826 23:05:50.227795    1433 log.go:172] (0xc0006d1ea0) (5) Data frame handling\nI0826 23:05:50.227814    1433 log.go:172] (0xc0006d1ea0) (5) Data frame sent\nI0826 23:05:50.227823    1433 log.go:172] (0xc0001051e0) Data frame received for 5\nI0826 23:05:50.227830    1433 log.go:172] (0xc0006d1ea0) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.23.110 80\nConnection to 10.102.23.110 80 port [tcp/http] succeeded!\nI0826 23:05:50.229361    1433 log.go:172] (0xc0001051e0) Data frame received for 1\nI0826 23:05:50.229387    1433 log.go:172] (0xc0006d1d60) (1) Data frame handling\nI0826 23:05:50.229402    1433 log.go:172] (0xc0006d1d60) (1) Data frame sent\nI0826 23:05:50.229420    1433 log.go:172] (0xc0001051e0) (0xc0006d1d60) Stream removed, broadcasting: 1\nI0826 23:05:50.229666    1433 log.go:172] (0xc0001051e0) Go away received\nI0826 23:05:50.229791    1433 log.go:172] (0xc0001051e0) (0xc0006d1d60) Stream removed, broadcasting: 1\nI0826 23:05:50.229823    1433 log.go:172] (0xc0001051e0) (0xc0006d1e00) Stream removed, broadcasting: 3\nI0826 23:05:50.229844    1433 log.go:172] (0xc0001051e0) (0xc0006d1ea0) Stream removed, broadcasting: 5\n"
Aug 26 23:05:50.239: INFO: stdout: ""
Aug 26 23:05:50.239: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:05:50.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6107" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:11.917 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":88,"skipped":1379,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:05:50.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 23:05:50.344: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c53fa6e-24a9-4182-9afb-989148573a13" in namespace "projected-3583" to be "success or failure"
Aug 26 23:05:50.365: INFO: Pod "downwardapi-volume-7c53fa6e-24a9-4182-9afb-989148573a13": Phase="Pending", Reason="", readiness=false. Elapsed: 21.311686ms
Aug 26 23:05:52.380: INFO: Pod "downwardapi-volume-7c53fa6e-24a9-4182-9afb-989148573a13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035516856s
Aug 26 23:05:54.385: INFO: Pod "downwardapi-volume-7c53fa6e-24a9-4182-9afb-989148573a13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04130598s
STEP: Saw pod success
Aug 26 23:05:54.385: INFO: Pod "downwardapi-volume-7c53fa6e-24a9-4182-9afb-989148573a13" satisfied condition "success or failure"
Aug 26 23:05:54.388: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-7c53fa6e-24a9-4182-9afb-989148573a13 container client-container: 
STEP: delete the pod
Aug 26 23:05:54.405: INFO: Waiting for pod downwardapi-volume-7c53fa6e-24a9-4182-9afb-989148573a13 to disappear
Aug 26 23:05:54.463: INFO: Pod downwardapi-volume-7c53fa6e-24a9-4182-9afb-989148573a13 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:05:54.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3583" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1416,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:05:54.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-d343a9b1-e430-4282-b417-658e74fda197
STEP: Creating a pod to test consume configMaps
Aug 26 23:05:54.543: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-33904b21-fbae-48b4-8e5a-3d62811d52d7" in namespace "projected-5090" to be "success or failure"
Aug 26 23:05:54.547: INFO: Pod "pod-projected-configmaps-33904b21-fbae-48b4-8e5a-3d62811d52d7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.739665ms
Aug 26 23:05:56.625: INFO: Pod "pod-projected-configmaps-33904b21-fbae-48b4-8e5a-3d62811d52d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082439378s
Aug 26 23:05:58.685: INFO: Pod "pod-projected-configmaps-33904b21-fbae-48b4-8e5a-3d62811d52d7": Phase="Running", Reason="", readiness=true. Elapsed: 4.142140544s
Aug 26 23:06:00.689: INFO: Pod "pod-projected-configmaps-33904b21-fbae-48b4-8e5a-3d62811d52d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.145730438s
STEP: Saw pod success
Aug 26 23:06:00.689: INFO: Pod "pod-projected-configmaps-33904b21-fbae-48b4-8e5a-3d62811d52d7" satisfied condition "success or failure"
Aug 26 23:06:00.691: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-33904b21-fbae-48b4-8e5a-3d62811d52d7 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 26 23:06:00.828: INFO: Waiting for pod pod-projected-configmaps-33904b21-fbae-48b4-8e5a-3d62811d52d7 to disappear
Aug 26 23:06:01.067: INFO: Pod pod-projected-configmaps-33904b21-fbae-48b4-8e5a-3d62811d52d7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:06:01.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5090" for this suite.

• [SLOW TEST:6.729 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1423,"failed":0}
SSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:06:01.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug 26 23:06:01.494: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3488 /api/v1/namespaces/watch-3488/configmaps/e2e-watch-test-label-changed 0b633bcd-33a3-4f04-be8a-32d9505000f6 4033200 0 2020-08-26 23:06:01 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 26 23:06:01.494: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3488 /api/v1/namespaces/watch-3488/configmaps/e2e-watch-test-label-changed 0b633bcd-33a3-4f04-be8a-32d9505000f6 4033201 0 2020-08-26 23:06:01 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 26 23:06:01.494: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3488 /api/v1/namespaces/watch-3488/configmaps/e2e-watch-test-label-changed 0b633bcd-33a3-4f04-be8a-32d9505000f6 4033202 0 2020-08-26 23:06:01 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug 26 23:06:11.538: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3488 /api/v1/namespaces/watch-3488/configmaps/e2e-watch-test-label-changed 0b633bcd-33a3-4f04-be8a-32d9505000f6 4033243 0 2020-08-26 23:06:01 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 26 23:06:11.538: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3488 /api/v1/namespaces/watch-3488/configmaps/e2e-watch-test-label-changed 0b633bcd-33a3-4f04-be8a-32d9505000f6 4033244 0 2020-08-26 23:06:01 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Aug 26 23:06:11.538: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3488 /api/v1/namespaces/watch-3488/configmaps/e2e-watch-test-label-changed 0b633bcd-33a3-4f04-be8a-32d9505000f6 4033245 0 2020-08-26 23:06:01 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:06:11.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3488" for this suite.

• [SLOW TEST:10.393 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":91,"skipped":1426,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:06:11.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-3b123b50-9d80-4599-97ce-a4bf900a8853
Aug 26 23:06:11.727: INFO: Pod name my-hostname-basic-3b123b50-9d80-4599-97ce-a4bf900a8853: Found 0 pods out of 1
Aug 26 23:06:16.781: INFO: Pod name my-hostname-basic-3b123b50-9d80-4599-97ce-a4bf900a8853: Found 1 pods out of 1
Aug 26 23:06:16.781: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-3b123b50-9d80-4599-97ce-a4bf900a8853" are running
Aug 26 23:06:16.784: INFO: Pod "my-hostname-basic-3b123b50-9d80-4599-97ce-a4bf900a8853-628rk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 23:06:11 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 23:06:14 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 23:06:14 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 23:06:11 +0000 UTC Reason: Message:}])
Aug 26 23:06:16.784: INFO: Trying to dial the pod
Aug 26 23:06:21.794: INFO: Controller my-hostname-basic-3b123b50-9d80-4599-97ce-a4bf900a8853: Got expected result from replica 1 [my-hostname-basic-3b123b50-9d80-4599-97ce-a4bf900a8853-628rk]: "my-hostname-basic-3b123b50-9d80-4599-97ce-a4bf900a8853-628rk", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:06:21.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9612" for this suite.

• [SLOW TEST:10.205 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":92,"skipped":1445,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:06:21.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 26 23:06:21.838: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 26 23:06:21.860: INFO: Waiting for terminating namespaces to be deleted...
Aug 26 23:06:21.863: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 26 23:06:21.869: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 26 23:06:21.869: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 26 23:06:21.869: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 26 23:06:21.869: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 23:06:21.869: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 26 23:06:21.869: INFO: 	Container app ready: true, restart count 0
Aug 26 23:06:21.869: INFO: my-hostname-basic-3b123b50-9d80-4599-97ce-a4bf900a8853-628rk from replication-controller-9612 started at 2020-08-26 23:06:11 +0000 UTC (1 container statuses recorded)
Aug 26 23:06:21.869: INFO: 	Container my-hostname-basic-3b123b50-9d80-4599-97ce-a4bf900a8853 ready: true, restart count 0
Aug 26 23:06:21.869: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 26 23:06:21.893: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 23:06:21.893: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 26 23:06:21.893: INFO: test-recreate-deployment-5f94c574ff-k4dkm from deployment-5601 started at 2020-08-23 04:50:56 +0000 UTC (1 container status recorded)
Aug 26 23:06:21.893: INFO: 	Container httpd ready: true, restart count 0
Aug 26 23:06:21.893: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 23:06:21.893: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 23:06:21.893: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 26 23:06:21.893: INFO: 	Container app ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-eaba9cd7-eb06-48e2-88ec-85b11c23ae4d 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-eaba9cd7-eb06-48e2-88ec-85b11c23ae4d off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-eaba9cd7-eb06-48e2-88ec-85b11c23ae4d
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:06:30.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1898" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:8.249 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":93,"skipped":1469,"failed":0}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:06:30.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-8062
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Aug 26 23:06:30.151: INFO: Found 0 stateful pods, waiting for 3
Aug 26 23:06:40.156: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 23:06:40.156: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 23:06:40.156: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 26 23:06:50.163: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 23:06:50.164: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 23:06:50.164: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 26 23:06:50.349: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Aug 26 23:07:00.431: INFO: Updating stateful set ss2
Aug 26 23:07:00.567: INFO: Waiting for Pod statefulset-8062/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 26 23:07:10.575: INFO: Waiting for Pod statefulset-8062/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Aug 26 23:07:20.783: INFO: Found 2 stateful pods, waiting for 3
Aug 26 23:07:30.788: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 23:07:30.788: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 23:07:30.788: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Aug 26 23:07:30.813: INFO: Updating stateful set ss2
Aug 26 23:07:30.846: INFO: Waiting for Pod statefulset-8062/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 26 23:07:40.855: INFO: Waiting for Pod statefulset-8062/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 26 23:07:50.872: INFO: Updating stateful set ss2
Aug 26 23:07:50.958: INFO: Waiting for StatefulSet statefulset-8062/ss2 to complete update
Aug 26 23:07:50.958: INFO: Waiting for Pod statefulset-8062/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 26 23:08:00.966: INFO: Waiting for StatefulSet statefulset-8062/ss2 to complete update
Aug 26 23:08:00.966: INFO: Waiting for Pod statefulset-8062/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 26 23:08:10.971: INFO: Deleting all statefulset in ns statefulset-8062
Aug 26 23:08:10.973: INFO: Scaling statefulset ss2 to 0
Aug 26 23:08:30.993: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 23:08:30.996: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:08:31.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8062" for this suite.

• [SLOW TEST:120.963 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":94,"skipped":1473,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:08:31.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:08:31.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 26 23:08:32.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9791 create -f -'
Aug 26 23:08:38.312: INFO: stderr: ""
Aug 26 23:08:38.312: INFO: stdout: "e2e-test-crd-publish-openapi-7239-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 26 23:08:38.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9791 delete e2e-test-crd-publish-openapi-7239-crds test-cr'
Aug 26 23:08:38.454: INFO: stderr: ""
Aug 26 23:08:38.454: INFO: stdout: "e2e-test-crd-publish-openapi-7239-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Aug 26 23:08:38.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9791 apply -f -'
Aug 26 23:08:38.706: INFO: stderr: ""
Aug 26 23:08:38.706: INFO: stdout: "e2e-test-crd-publish-openapi-7239-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 26 23:08:38.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9791 delete e2e-test-crd-publish-openapi-7239-crds test-cr'
Aug 26 23:08:38.825: INFO: stderr: ""
Aug 26 23:08:38.825: INFO: stdout: "e2e-test-crd-publish-openapi-7239-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 26 23:08:38.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7239-crds'
Aug 26 23:08:39.094: INFO: stderr: ""
Aug 26 23:08:39.095: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7239-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:08:41.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9791" for this suite.

• [SLOW TEST:10.977 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":95,"skipped":1484,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:08:41.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-592b518e-a71f-45df-b922-3f63c84ecb78 in namespace container-probe-8158
Aug 26 23:08:46.083: INFO: Started pod liveness-592b518e-a71f-45df-b922-3f63c84ecb78 in namespace container-probe-8158
STEP: checking the pod's current state and verifying that restartCount is present
Aug 26 23:08:46.086: INFO: Initial restart count of pod liveness-592b518e-a71f-45df-b922-3f63c84ecb78 is 0
Aug 26 23:09:04.375: INFO: Restart count of pod container-probe-8158/liveness-592b518e-a71f-45df-b922-3f63c84ecb78 is now 1 (18.289405791s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:09:04.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8158" for this suite.

• [SLOW TEST:22.699 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1527,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:09:04.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 23:09:05.792: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 23:09:07.817: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080145, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080145, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080146, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080145, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 23:09:09.828: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080145, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080145, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080146, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080145, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 23:09:12.848: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:09:25.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6463" for this suite.
STEP: Destroying namespace "webhook-6463-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:20.785 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":97,"skipped":1532,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:09:25.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Aug 26 23:09:25.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:09:40.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8966" for this suite.

• [SLOW TEST:15.307 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":98,"skipped":1548,"failed":0}
S
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:09:40.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:09:40.876: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Aug 26 23:09:45.881: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 26 23:09:45.881: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 26 23:09:50.143: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-3072 /apis/apps/v1/namespaces/deployment-3072/deployments/test-cleanup-deployment a92b553b-8d85-417f-8574-62e644c204d9 4034417 1 2020-08-26 23:09:45 +0000 UTC   map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031e8c88  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-26 23:09:46 +0000 UTC,LastTransitionTime:2020-08-26 23:09:46 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-08-26 23:09:49 +0000 UTC,LastTransitionTime:2020-08-26 23:09:45 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 26 23:09:50.146: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-3072 /apis/apps/v1/namespaces/deployment-3072/replicasets/test-cleanup-deployment-55ffc6b7b6 38571215-a78a-47db-b85f-ccb40fb82166 4034404 1 2020-08-26 23:09:45 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment a92b553b-8d85-417f-8574-62e644c204d9 0xc00323ee47 0xc00323ee48}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00323eec8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 26 23:09:50.149: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-rt27p" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-rt27p test-cleanup-deployment-55ffc6b7b6- deployment-3072 /api/v1/namespaces/deployment-3072/pods/test-cleanup-deployment-55ffc6b7b6-rt27p f771d9c4-7f4e-4ecf-b0dc-615963dabe77 4034403 0 2020-08-26 23:09:45 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 38571215-a78a-47db-b85f-ccb40fb82166 0xc003216ef7 0xc003216ef8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-skxlf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-skxlf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-skxlf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:09:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:09:49 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:09:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:09:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.56,StartTime:2020-08-26 23:09:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 23:09:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://1654e875d759033d018b34353c6521217eef56fd050f72f35b564d102b39f1bd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.56,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:09:50.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3072" for this suite.

• [SLOW TEST:9.372 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":99,"skipped":1549,"failed":0}
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:09:50.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-1f03c400-1dd2-418d-a335-eefdf9e7b3fe
STEP: Creating a pod to test consume secrets
Aug 26 23:09:50.290: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7b740f27-db35-474c-98a5-e3d4391780b2" in namespace "projected-8713" to be "success or failure"
Aug 26 23:09:50.293: INFO: Pod "pod-projected-secrets-7b740f27-db35-474c-98a5-e3d4391780b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.940786ms
Aug 26 23:09:52.297: INFO: Pod "pod-projected-secrets-7b740f27-db35-474c-98a5-e3d4391780b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007241315s
Aug 26 23:09:54.341: INFO: Pod "pod-projected-secrets-7b740f27-db35-474c-98a5-e3d4391780b2": Phase="Running", Reason="", readiness=true. Elapsed: 4.05134663s
Aug 26 23:09:56.345: INFO: Pod "pod-projected-secrets-7b740f27-db35-474c-98a5-e3d4391780b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055215396s
STEP: Saw pod success
Aug 26 23:09:56.345: INFO: Pod "pod-projected-secrets-7b740f27-db35-474c-98a5-e3d4391780b2" satisfied condition "success or failure"
Aug 26 23:09:56.348: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-7b740f27-db35-474c-98a5-e3d4391780b2 container secret-volume-test: 
STEP: delete the pod
Aug 26 23:09:56.401: INFO: Waiting for pod pod-projected-secrets-7b740f27-db35-474c-98a5-e3d4391780b2 to disappear
Aug 26 23:09:56.416: INFO: Pod pod-projected-secrets-7b740f27-db35-474c-98a5-e3d4391780b2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:09:56.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8713" for this suite.

• [SLOW TEST:6.269 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1552,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:09:56.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 26 23:09:56.502: INFO: Waiting up to 5m0s for pod "pod-bb7d2f5c-2588-4c24-ac9d-c7fb7f7327bc" in namespace "emptydir-9735" to be "success or failure"
Aug 26 23:09:56.506: INFO: Pod "pod-bb7d2f5c-2588-4c24-ac9d-c7fb7f7327bc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.370451ms
Aug 26 23:09:58.509: INFO: Pod "pod-bb7d2f5c-2588-4c24-ac9d-c7fb7f7327bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006971638s
Aug 26 23:10:00.513: INFO: Pod "pod-bb7d2f5c-2588-4c24-ac9d-c7fb7f7327bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01100791s
STEP: Saw pod success
Aug 26 23:10:00.513: INFO: Pod "pod-bb7d2f5c-2588-4c24-ac9d-c7fb7f7327bc" satisfied condition "success or failure"
Aug 26 23:10:00.516: INFO: Trying to get logs from node jerma-worker pod pod-bb7d2f5c-2588-4c24-ac9d-c7fb7f7327bc container test-container: 
STEP: delete the pod
Aug 26 23:10:00.536: INFO: Waiting for pod pod-bb7d2f5c-2588-4c24-ac9d-c7fb7f7327bc to disappear
Aug 26 23:10:00.542: INFO: Pod pod-bb7d2f5c-2588-4c24-ac9d-c7fb7f7327bc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:10:00.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9735" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1555,"failed":0}
SSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:10:00.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap that has name configmap-test-emptyKey-3cee0cab-4b5c-45c6-af92-baa067f8f6f6
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:10:00.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-84" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":102,"skipped":1563,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:10:00.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-4e0816c9-33cc-4974-917a-83cc7ce8b091
STEP: Creating a pod to test consume secrets
Aug 26 23:10:00.755: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-602886e0-c2eb-4220-b54d-4faee77bd19d" in namespace "projected-9577" to be "success or failure"
Aug 26 23:10:00.795: INFO: Pod "pod-projected-secrets-602886e0-c2eb-4220-b54d-4faee77bd19d": Phase="Pending", Reason="", readiness=false. Elapsed: 40.664037ms
Aug 26 23:10:02.821: INFO: Pod "pod-projected-secrets-602886e0-c2eb-4220-b54d-4faee77bd19d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065710204s
Aug 26 23:10:04.898: INFO: Pod "pod-projected-secrets-602886e0-c2eb-4220-b54d-4faee77bd19d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.142744623s
STEP: Saw pod success
Aug 26 23:10:04.898: INFO: Pod "pod-projected-secrets-602886e0-c2eb-4220-b54d-4faee77bd19d" satisfied condition "success or failure"
Aug 26 23:10:04.901: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-602886e0-c2eb-4220-b54d-4faee77bd19d container projected-secret-volume-test: 
STEP: delete the pod
Aug 26 23:10:04.936: INFO: Waiting for pod pod-projected-secrets-602886e0-c2eb-4220-b54d-4faee77bd19d to disappear
Aug 26 23:10:04.971: INFO: Pod pod-projected-secrets-602886e0-c2eb-4220-b54d-4faee77bd19d no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:10:04.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9577" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1568,"failed":0}

------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:10:04.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-5983
STEP: Creating active service to test reachability when its FQDN is referred to as externalName for another service
STEP: creating service externalsvc in namespace services-5983
STEP: creating replication controller externalsvc in namespace services-5983
I0826 23:10:05.346372       6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-5983, replica count: 2
I0826 23:10:08.396953       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 23:10:11.397247       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Aug 26 23:10:11.473: INFO: Creating new exec pod
Aug 26 23:10:15.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5983 execpodvf2xn -- /bin/sh -x -c nslookup nodeport-service'
Aug 26 23:10:15.762: INFO: stderr: "I0826 23:10:15.661742    1561 log.go:172] (0xc000ae4f20) (0xc000a94320) Create stream\nI0826 23:10:15.661799    1561 log.go:172] (0xc000ae4f20) (0xc000a94320) Stream added, broadcasting: 1\nI0826 23:10:15.663616    1561 log.go:172] (0xc000ae4f20) Reply frame received for 1\nI0826 23:10:15.663649    1561 log.go:172] (0xc000ae4f20) (0xc000a943c0) Create stream\nI0826 23:10:15.663659    1561 log.go:172] (0xc000ae4f20) (0xc000a943c0) Stream added, broadcasting: 3\nI0826 23:10:15.664349    1561 log.go:172] (0xc000ae4f20) Reply frame received for 3\nI0826 23:10:15.664378    1561 log.go:172] (0xc000ae4f20) (0xc000aa8320) Create stream\nI0826 23:10:15.664387    1561 log.go:172] (0xc000ae4f20) (0xc000aa8320) Stream added, broadcasting: 5\nI0826 23:10:15.665259    1561 log.go:172] (0xc000ae4f20) Reply frame received for 5\nI0826 23:10:15.741476    1561 log.go:172] (0xc000ae4f20) Data frame received for 5\nI0826 23:10:15.741505    1561 log.go:172] (0xc000aa8320) (5) Data frame handling\nI0826 23:10:15.741524    1561 log.go:172] (0xc000aa8320) (5) Data frame sent\n+ nslookup nodeport-service\nI0826 23:10:15.746679    1561 log.go:172] (0xc000ae4f20) Data frame received for 3\nI0826 23:10:15.746694    1561 log.go:172] (0xc000a943c0) (3) Data frame handling\nI0826 23:10:15.746732    1561 log.go:172] (0xc000a943c0) (3) Data frame sent\nI0826 23:10:15.747844    1561 log.go:172] (0xc000ae4f20) Data frame received for 3\nI0826 23:10:15.747869    1561 log.go:172] (0xc000a943c0) (3) Data frame handling\nI0826 23:10:15.747892    1561 log.go:172] (0xc000a943c0) (3) Data frame sent\nI0826 23:10:15.748556    1561 log.go:172] (0xc000ae4f20) Data frame received for 5\nI0826 23:10:15.748639    1561 log.go:172] (0xc000aa8320) (5) Data frame handling\nI0826 23:10:15.748667    1561 log.go:172] (0xc000ae4f20) Data frame received for 3\nI0826 23:10:15.748679    1561 log.go:172] (0xc000a943c0) (3) Data frame handling\nI0826 23:10:15.750709    1561 log.go:172] (0xc000ae4f20) Data frame received for 1\nI0826 23:10:15.750740    1561 log.go:172] (0xc000a94320) (1) Data frame handling\nI0826 23:10:15.750755    1561 log.go:172] (0xc000a94320) (1) Data frame sent\nI0826 23:10:15.750781    1561 log.go:172] (0xc000ae4f20) (0xc000a94320) Stream removed, broadcasting: 1\nI0826 23:10:15.750817    1561 log.go:172] (0xc000ae4f20) Go away received\nI0826 23:10:15.751189    1561 log.go:172] (0xc000ae4f20) (0xc000a94320) Stream removed, broadcasting: 1\nI0826 23:10:15.751206    1561 log.go:172] (0xc000ae4f20) (0xc000a943c0) Stream removed, broadcasting: 3\nI0826 23:10:15.751213    1561 log.go:172] (0xc000ae4f20) (0xc000aa8320) Stream removed, broadcasting: 5\n"
Aug 26 23:10:15.762: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5983.svc.cluster.local\tcanonical name = externalsvc.services-5983.svc.cluster.local.\nName:\texternalsvc.services-5983.svc.cluster.local\nAddress: 10.98.190.71\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-5983, will wait for the garbage collector to delete the pods
Aug 26 23:10:15.822: INFO: Deleting ReplicationController externalsvc took: 6.5698ms
Aug 26 23:10:15.922: INFO: Terminating ReplicationController externalsvc pods took: 100.221955ms
Aug 26 23:10:31.778: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:10:31.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5983" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:26.837 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":104,"skipped":1568,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:10:31.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Aug 26 23:10:32.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1752'
Aug 26 23:10:33.290: INFO: stderr: ""
Aug 26 23:10:33.290: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 26 23:10:33.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1752'
Aug 26 23:10:33.514: INFO: stderr: ""
Aug 26 23:10:33.514: INFO: stdout: "update-demo-nautilus-cmvw2 "
STEP: Replicas for name=update-demo: expected=2 actual=1
Aug 26 23:10:38.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1752'
Aug 26 23:10:38.623: INFO: stderr: ""
Aug 26 23:10:38.623: INFO: stdout: "update-demo-nautilus-cmvw2 update-demo-nautilus-rzb2p "
Aug 26 23:10:38.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cmvw2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1752'
Aug 26 23:10:38.712: INFO: stderr: ""
Aug 26 23:10:38.712: INFO: stdout: "true"
Aug 26 23:10:38.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cmvw2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1752'
Aug 26 23:10:38.840: INFO: stderr: ""
Aug 26 23:10:38.840: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 26 23:10:38.840: INFO: validating pod update-demo-nautilus-cmvw2
Aug 26 23:10:38.845: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 26 23:10:38.845: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 26 23:10:38.845: INFO: update-demo-nautilus-cmvw2 is verified up and running
Aug 26 23:10:38.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rzb2p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1752'
Aug 26 23:10:38.937: INFO: stderr: ""
Aug 26 23:10:38.937: INFO: stdout: "true"
Aug 26 23:10:38.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rzb2p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1752'
Aug 26 23:10:39.032: INFO: stderr: ""
Aug 26 23:10:39.032: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 26 23:10:39.032: INFO: validating pod update-demo-nautilus-rzb2p
Aug 26 23:10:39.036: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 26 23:10:39.036: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 26 23:10:39.036: INFO: update-demo-nautilus-rzb2p is verified up and running
STEP: rolling-update to new replication controller
Aug 26 23:10:39.037: INFO: scanned /root for discovery docs: 
Aug 26 23:10:39.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-1752'
Aug 26 23:11:01.717: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 26 23:11:01.717: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 26 23:11:01.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1752'
Aug 26 23:11:01.820: INFO: stderr: ""
Aug 26 23:11:01.820: INFO: stdout: "update-demo-kitten-8cwzp update-demo-kitten-bxtgx "
Aug 26 23:11:01.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8cwzp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1752'
Aug 26 23:11:01.917: INFO: stderr: ""
Aug 26 23:11:01.917: INFO: stdout: "true"
Aug 26 23:11:01.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8cwzp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1752'
Aug 26 23:11:02.008: INFO: stderr: ""
Aug 26 23:11:02.009: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 26 23:11:02.009: INFO: validating pod update-demo-kitten-8cwzp
Aug 26 23:11:02.012: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 26 23:11:02.012: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 26 23:11:02.012: INFO: update-demo-kitten-8cwzp is verified up and running
Aug 26 23:11:02.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bxtgx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1752'
Aug 26 23:11:02.113: INFO: stderr: ""
Aug 26 23:11:02.113: INFO: stdout: "true"
Aug 26 23:11:02.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bxtgx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1752'
Aug 26 23:11:02.221: INFO: stderr: ""
Aug 26 23:11:02.221: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 26 23:11:02.221: INFO: validating pod update-demo-kitten-bxtgx
Aug 26 23:11:02.225: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 26 23:11:02.225: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 26 23:11:02.225: INFO: update-demo-kitten-bxtgx is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:11:02.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1752" for this suite.

• [SLOW TEST:30.415 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should do a rolling update of a replication controller  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":105,"skipped":1594,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:11:02.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl replace
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1796
[It] should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 26 23:11:02.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-3241'
Aug 26 23:11:02.415: INFO: stderr: ""
Aug 26 23:11:02.415: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Aug 26 23:11:07.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-3241 -o json'
Aug 26 23:11:07.586: INFO: stderr: ""
Aug 26 23:11:07.586: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-08-26T23:11:02Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-3241\",\n        \"resourceVersion\": \"4034939\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-3241/pods/e2e-test-httpd-pod\",\n        \"uid\": \"1170e5c2-70a6-472f-b5f9-36356506e659\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-gzckq\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-worker2\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-gzckq\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-gzckq\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-26T23:11:02Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-26T23:11:05Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-26T23:11:05Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-26T23:11:02Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://385727a3d737af58a8fe2503e5636d31d39eafaa4140fab66b8222f046db256b\",\n                
\"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-08-26T23:11:05Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.3\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.1.61\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.1.61\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-08-26T23:11:02Z\"\n    }\n}\n"
STEP: replace the image in the pod
Aug 26 23:11:07.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3241'
Aug 26 23:11:07.862: INFO: stderr: ""
Aug 26 23:11:07.862: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1801
Aug 26 23:11:07.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3241'
Aug 26 23:11:21.681: INFO: stderr: ""
Aug 26 23:11:21.681: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:11:21.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3241" for this suite.

• [SLOW TEST:19.500 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1792
    should update a single-container pod's image  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":106,"skipped":1609,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:11:21.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 23:11:21.870: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fc6077d5-c4a8-4845-a26f-456ba41b9666" in namespace "projected-1600" to be "success or failure"
Aug 26 23:11:21.873: INFO: Pod "downwardapi-volume-fc6077d5-c4a8-4845-a26f-456ba41b9666": Phase="Pending", Reason="", readiness=false. Elapsed: 3.163656ms
Aug 26 23:11:24.011: INFO: Pod "downwardapi-volume-fc6077d5-c4a8-4845-a26f-456ba41b9666": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141299044s
Aug 26 23:11:26.035: INFO: Pod "downwardapi-volume-fc6077d5-c4a8-4845-a26f-456ba41b9666": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.165481746s
STEP: Saw pod success
Aug 26 23:11:26.035: INFO: Pod "downwardapi-volume-fc6077d5-c4a8-4845-a26f-456ba41b9666" satisfied condition "success or failure"
Aug 26 23:11:26.039: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-fc6077d5-c4a8-4845-a26f-456ba41b9666 container client-container: 
STEP: delete the pod
Aug 26 23:11:26.235: INFO: Waiting for pod downwardapi-volume-fc6077d5-c4a8-4845-a26f-456ba41b9666 to disappear
Aug 26 23:11:26.275: INFO: Pod downwardapi-volume-fc6077d5-c4a8-4845-a26f-456ba41b9666 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:11:26.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1600" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1619,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:11:26.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0826 23:11:57.006206       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 26 23:11:57.006: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:11:57.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5808" for this suite.

• [SLOW TEST:30.727 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":108,"skipped":1628,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:11:57.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl label
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1276
STEP: creating the pod
Aug 26 23:11:57.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7494'
Aug 26 23:11:57.381: INFO: stderr: ""
Aug 26 23:11:57.381: INFO: stdout: "pod/pause created\n"
Aug 26 23:11:57.381: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 26 23:11:57.381: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7494" to be "running and ready"
Aug 26 23:11:57.406: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 24.614042ms
Aug 26 23:11:59.410: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028722305s
Aug 26 23:12:01.414: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.032914816s
Aug 26 23:12:01.414: INFO: Pod "pause" satisfied condition "running and ready"
Aug 26 23:12:01.414: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 26 23:12:01.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7494'
Aug 26 23:12:01.524: INFO: stderr: ""
Aug 26 23:12:01.524: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 26 23:12:01.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7494'
Aug 26 23:12:01.610: INFO: stderr: ""
Aug 26 23:12:01.610: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 26 23:12:01.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7494'
Aug 26 23:12:01.717: INFO: stderr: ""
Aug 26 23:12:01.717: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 26 23:12:01.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7494'
Aug 26 23:12:01.817: INFO: stderr: ""
Aug 26 23:12:01.817: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    \n"
[AfterEach] Kubectl label
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1283
STEP: using delete to clean up resources
Aug 26 23:12:01.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7494'
Aug 26 23:12:01.969: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 23:12:01.969: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 26 23:12:01.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7494'
Aug 26 23:12:02.199: INFO: stderr: "No resources found in kubectl-7494 namespace.\n"
Aug 26 23:12:02.199: INFO: stdout: ""
Aug 26 23:12:02.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7494 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 26 23:12:02.401: INFO: stderr: ""
Aug 26 23:12:02.401: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:12:02.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7494" for this suite.

• [SLOW TEST:5.552 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1273
    should update the label on a resource  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":109,"skipped":1631,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:12:02.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 23:12:05.183: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 23:12:07.197: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080325, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080325, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080325, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080325, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 23:12:10.230: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:12:10.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:12:11.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7256" for this suite.
STEP: Destroying namespace "webhook-7256-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.958 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":110,"skipped":1667,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:12:11.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-8rg9
STEP: Creating a pod to test atomic-volume-subpath
Aug 26 23:12:11.589: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-8rg9" in namespace "subpath-5314" to be "success or failure"
Aug 26 23:12:11.594: INFO: Pod "pod-subpath-test-secret-8rg9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.519611ms
Aug 26 23:12:13.598: INFO: Pod "pod-subpath-test-secret-8rg9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008568062s
Aug 26 23:12:15.602: INFO: Pod "pod-subpath-test-secret-8rg9": Phase="Running", Reason="", readiness=true. Elapsed: 4.0129119s
Aug 26 23:12:17.606: INFO: Pod "pod-subpath-test-secret-8rg9": Phase="Running", Reason="", readiness=true. Elapsed: 6.017200357s
Aug 26 23:12:19.610: INFO: Pod "pod-subpath-test-secret-8rg9": Phase="Running", Reason="", readiness=true. Elapsed: 8.0209305s
Aug 26 23:12:21.614: INFO: Pod "pod-subpath-test-secret-8rg9": Phase="Running", Reason="", readiness=true. Elapsed: 10.024573665s
Aug 26 23:12:23.618: INFO: Pod "pod-subpath-test-secret-8rg9": Phase="Running", Reason="", readiness=true. Elapsed: 12.028773942s
Aug 26 23:12:25.623: INFO: Pod "pod-subpath-test-secret-8rg9": Phase="Running", Reason="", readiness=true. Elapsed: 14.033257948s
Aug 26 23:12:27.627: INFO: Pod "pod-subpath-test-secret-8rg9": Phase="Running", Reason="", readiness=true. Elapsed: 16.037673071s
Aug 26 23:12:29.631: INFO: Pod "pod-subpath-test-secret-8rg9": Phase="Running", Reason="", readiness=true. Elapsed: 18.041750933s
Aug 26 23:12:31.635: INFO: Pod "pod-subpath-test-secret-8rg9": Phase="Running", Reason="", readiness=true. Elapsed: 20.046062277s
Aug 26 23:12:33.639: INFO: Pod "pod-subpath-test-secret-8rg9": Phase="Running", Reason="", readiness=true. Elapsed: 22.049765623s
Aug 26 23:12:35.643: INFO: Pod "pod-subpath-test-secret-8rg9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.05388215s
STEP: Saw pod success
Aug 26 23:12:35.643: INFO: Pod "pod-subpath-test-secret-8rg9" satisfied condition "success or failure"
Aug 26 23:12:35.646: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-8rg9 container test-container-subpath-secret-8rg9: 
STEP: delete the pod
Aug 26 23:12:35.669: INFO: Waiting for pod pod-subpath-test-secret-8rg9 to disappear
Aug 26 23:12:35.674: INFO: Pod pod-subpath-test-secret-8rg9 no longer exists
STEP: Deleting pod pod-subpath-test-secret-8rg9
Aug 26 23:12:35.674: INFO: Deleting pod "pod-subpath-test-secret-8rg9" in namespace "subpath-5314"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:12:35.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5314" for this suite.

• [SLOW TEST:24.186 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":111,"skipped":1674,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:12:35.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 23:12:35.792: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b3c11c69-db39-4de1-9815-de0c0bc8e67b" in namespace "downward-api-2923" to be "success or failure"
Aug 26 23:12:35.795: INFO: Pod "downwardapi-volume-b3c11c69-db39-4de1-9815-de0c0bc8e67b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.134374ms
Aug 26 23:12:37.801: INFO: Pod "downwardapi-volume-b3c11c69-db39-4de1-9815-de0c0bc8e67b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009326031s
Aug 26 23:12:39.805: INFO: Pod "downwardapi-volume-b3c11c69-db39-4de1-9815-de0c0bc8e67b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013257643s
STEP: Saw pod success
Aug 26 23:12:39.805: INFO: Pod "downwardapi-volume-b3c11c69-db39-4de1-9815-de0c0bc8e67b" satisfied condition "success or failure"
Aug 26 23:12:39.808: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-b3c11c69-db39-4de1-9815-de0c0bc8e67b container client-container: 
STEP: delete the pod
Aug 26 23:12:39.830: INFO: Waiting for pod downwardapi-volume-b3c11c69-db39-4de1-9815-de0c0bc8e67b to disappear
Aug 26 23:12:39.834: INFO: Pod downwardapi-volume-b3c11c69-db39-4de1-9815-de0c0bc8e67b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:12:39.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2923" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1705,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:12:39.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:12:39.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-8558" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":113,"skipped":1733,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:12:39.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Aug 26 23:12:39.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6092'
Aug 26 23:12:40.217: INFO: stderr: ""
Aug 26 23:12:40.217: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 26 23:12:40.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6092'
Aug 26 23:12:40.308: INFO: stderr: ""
Aug 26 23:12:40.308: INFO: stdout: "update-demo-nautilus-klb5b update-demo-nautilus-phxlt "
Aug 26 23:12:40.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-klb5b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6092'
Aug 26 23:12:40.397: INFO: stderr: ""
Aug 26 23:12:40.397: INFO: stdout: ""
Aug 26 23:12:40.397: INFO: update-demo-nautilus-klb5b is created but not running
Aug 26 23:12:45.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6092'
Aug 26 23:12:45.537: INFO: stderr: ""
Aug 26 23:12:45.537: INFO: stdout: "update-demo-nautilus-klb5b update-demo-nautilus-phxlt "
Aug 26 23:12:45.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-klb5b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6092'
Aug 26 23:12:45.628: INFO: stderr: ""
Aug 26 23:12:45.628: INFO: stdout: "true"
Aug 26 23:12:45.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-klb5b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6092'
Aug 26 23:12:45.735: INFO: stderr: ""
Aug 26 23:12:45.735: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 26 23:12:45.735: INFO: validating pod update-demo-nautilus-klb5b
Aug 26 23:12:45.740: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 26 23:12:45.740: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 26 23:12:45.740: INFO: update-demo-nautilus-klb5b is verified up and running
Aug 26 23:12:45.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-phxlt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6092'
Aug 26 23:12:45.827: INFO: stderr: ""
Aug 26 23:12:45.827: INFO: stdout: "true"
Aug 26 23:12:45.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-phxlt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6092'
Aug 26 23:12:45.931: INFO: stderr: ""
Aug 26 23:12:45.931: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 26 23:12:45.931: INFO: validating pod update-demo-nautilus-phxlt
Aug 26 23:12:45.935: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 26 23:12:45.935: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 26 23:12:45.935: INFO: update-demo-nautilus-phxlt is verified up and running
STEP: using delete to clean up resources
Aug 26 23:12:45.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6092'
Aug 26 23:12:46.027: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 23:12:46.027: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 26 23:12:46.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6092'
Aug 26 23:12:46.129: INFO: stderr: "No resources found in kubectl-6092 namespace.\n"
Aug 26 23:12:46.129: INFO: stdout: ""
Aug 26 23:12:46.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6092 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 26 23:12:46.234: INFO: stderr: ""
Aug 26 23:12:46.234: INFO: stdout: "update-demo-nautilus-klb5b\nupdate-demo-nautilus-phxlt\n"
Aug 26 23:12:46.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6092'
Aug 26 23:12:46.851: INFO: stderr: "No resources found in kubectl-6092 namespace.\n"
Aug 26 23:12:46.851: INFO: stdout: ""
Aug 26 23:12:46.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6092 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 26 23:12:46.944: INFO: stderr: ""
Aug 26 23:12:46.944: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:12:46.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6092" for this suite.

• [SLOW TEST:7.030 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should create and stop a replication controller  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":278,"completed":114,"skipped":1738,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:12:46.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 23:12:48.047: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 23:12:50.057: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080368, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080368, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080368, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080368, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 23:12:53.130: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:12:53.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7782" for this suite.
STEP: Destroying namespace "webhook-7782-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.395 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":115,"skipped":1772,"failed":0}
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:12:53.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 26 23:12:53.385: INFO: PodSpec: initContainers in spec.initContainers
Aug 26 23:13:49.135: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-0fd38726-4d15-4b3e-bf0f-84c6536f56cd", GenerateName:"", Namespace:"init-container-6161", SelfLink:"/api/v1/namespaces/init-container-6161/pods/pod-init-0fd38726-4d15-4b3e-bf0f-84c6536f56cd", UID:"99b5fe35-031f-4969-906e-d9ed6bac4c76", ResourceVersion:"4035865", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63734080373, loc:(*time.Location)(0x7931640)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"385249870"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-7sbsb", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002efc000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7sbsb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7sbsb", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7sbsb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0031b0068), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0023fca20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0031b0100)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0031b0120)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0031b0128), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0031b012c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080373, loc:(*time.Location)(0x7931640)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080373, loc:(*time.Location)(0x7931640)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080373, loc:(*time.Location)(0x7931640)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080373, loc:(*time.Location)(0x7931640)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.6", PodIP:"10.244.2.229", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.229"}}, StartTime:(*v1.Time)(0xc0035d8040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0035d8080), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002e14070)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://6017754db1404f19ed1cd710c293ddb52630151fde495c6e760ca0c1f0273f11", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0035d80c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0035d8060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0031b01bf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:13:49.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6161" for this suite.

• [SLOW TEST:55.801 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":116,"skipped":1773,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:13:49.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:13:49.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Aug 26 23:13:52.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9576 create -f -'
Aug 26 23:13:59.363: INFO: stderr: ""
Aug 26 23:13:59.363: INFO: stdout: "e2e-test-crd-publish-openapi-8878-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 26 23:13:59.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9576 delete e2e-test-crd-publish-openapi-8878-crds test-foo'
Aug 26 23:13:59.460: INFO: stderr: ""
Aug 26 23:13:59.460: INFO: stdout: "e2e-test-crd-publish-openapi-8878-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Aug 26 23:13:59.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9576 apply -f -'
Aug 26 23:13:59.744: INFO: stderr: ""
Aug 26 23:13:59.744: INFO: stdout: "e2e-test-crd-publish-openapi-8878-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 26 23:13:59.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9576 delete e2e-test-crd-publish-openapi-8878-crds test-foo'
Aug 26 23:13:59.857: INFO: stderr: ""
Aug 26 23:13:59.857: INFO: stdout: "e2e-test-crd-publish-openapi-8878-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Aug 26 23:13:59.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9576 create -f -'
Aug 26 23:14:00.091: INFO: rc: 1
Aug 26 23:14:00.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9576 apply -f -'
Aug 26 23:14:00.471: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Aug 26 23:14:00.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9576 create -f -'
Aug 26 23:14:00.734: INFO: rc: 1
Aug 26 23:14:00.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9576 apply -f -'
Aug 26 23:14:00.973: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Aug 26 23:14:00.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8878-crds'
Aug 26 23:14:01.194: INFO: stderr: ""
Aug 26 23:14:01.194: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8878-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Aug 26 23:14:01.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8878-crds.metadata'
Aug 26 23:14:01.518: INFO: stderr: ""
Aug 26 23:14:01.518: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8878-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. 
If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. 
More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Aug 26 23:14:01.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8878-crds.spec'
Aug 26 23:14:01.790: INFO: stderr: ""
Aug 26 23:14:01.790: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8878-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Aug 26 23:14:01.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8878-crds.spec.bars'
Aug 26 23:14:02.031: INFO: stderr: ""
Aug 26 23:14:02.031: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8878-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Aug 26 23:14:02.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8878-crds.spec.bars2'
Aug 26 23:14:02.268: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:14:04.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9576" for this suite.

• [SLOW TEST:14.977 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":117,"skipped":1777,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:14:04.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-znpg
STEP: Creating a pod to test atomic-volume-subpath
Aug 26 23:14:04.237: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-znpg" in namespace "subpath-264" to be "success or failure"
Aug 26 23:14:04.274: INFO: Pod "pod-subpath-test-projected-znpg": Phase="Pending", Reason="", readiness=false. Elapsed: 36.886372ms
Aug 26 23:14:06.304: INFO: Pod "pod-subpath-test-projected-znpg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066097857s
Aug 26 23:14:08.312: INFO: Pod "pod-subpath-test-projected-znpg": Phase="Running", Reason="", readiness=true. Elapsed: 4.074366124s
Aug 26 23:14:10.316: INFO: Pod "pod-subpath-test-projected-znpg": Phase="Running", Reason="", readiness=true. Elapsed: 6.0785808s
Aug 26 23:14:12.320: INFO: Pod "pod-subpath-test-projected-znpg": Phase="Running", Reason="", readiness=true. Elapsed: 8.082494213s
Aug 26 23:14:14.408: INFO: Pod "pod-subpath-test-projected-znpg": Phase="Running", Reason="", readiness=true. Elapsed: 10.17042823s
Aug 26 23:14:16.412: INFO: Pod "pod-subpath-test-projected-znpg": Phase="Running", Reason="", readiness=true. Elapsed: 12.174935297s
Aug 26 23:14:18.416: INFO: Pod "pod-subpath-test-projected-znpg": Phase="Running", Reason="", readiness=true. Elapsed: 14.178642684s
Aug 26 23:14:20.420: INFO: Pod "pod-subpath-test-projected-znpg": Phase="Running", Reason="", readiness=true. Elapsed: 16.182169759s
Aug 26 23:14:22.424: INFO: Pod "pod-subpath-test-projected-znpg": Phase="Running", Reason="", readiness=true. Elapsed: 18.18607948s
Aug 26 23:14:24.427: INFO: Pod "pod-subpath-test-projected-znpg": Phase="Running", Reason="", readiness=true. Elapsed: 20.189773017s
Aug 26 23:14:26.433: INFO: Pod "pod-subpath-test-projected-znpg": Phase="Running", Reason="", readiness=true. Elapsed: 22.195552019s
Aug 26 23:14:28.480: INFO: Pod "pod-subpath-test-projected-znpg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.242665954s
STEP: Saw pod success
Aug 26 23:14:28.480: INFO: Pod "pod-subpath-test-projected-znpg" satisfied condition "success or failure"
Aug 26 23:14:28.484: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-znpg container test-container-subpath-projected-znpg: 
STEP: delete the pod
Aug 26 23:14:28.547: INFO: Waiting for pod pod-subpath-test-projected-znpg to disappear
Aug 26 23:14:28.683: INFO: Pod pod-subpath-test-projected-znpg no longer exists
STEP: Deleting pod pod-subpath-test-projected-znpg
Aug 26 23:14:28.683: INFO: Deleting pod "pod-subpath-test-projected-znpg" in namespace "subpath-264"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:14:28.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-264" for this suite.

• [SLOW TEST:24.569 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":118,"skipped":1796,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:14:28.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 23:14:30.270: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 23:14:32.281: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080470, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080470, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080470, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080470, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 23:14:34.306: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080470, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080470, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080470, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080470, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 23:14:37.318: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:14:37.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4983-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:14:38.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1063" for this suite.
STEP: Destroying namespace "webhook-1063-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.878 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":119,"skipped":1804,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:14:38.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Aug 26 23:14:38.701: INFO: Pod name pod-release: Found 0 pods out of 1
Aug 26 23:14:43.704: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:14:44.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-769" for this suite.

• [SLOW TEST:6.147 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":120,"skipped":1812,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:14:44.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:14:45.642: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-2157710d-c28b-412f-b71a-36fb20a8f809" in namespace "security-context-test-2957" to be "success or failure"
Aug 26 23:14:46.295: INFO: Pod "alpine-nnp-false-2157710d-c28b-412f-b71a-36fb20a8f809": Phase="Pending", Reason="", readiness=false. Elapsed: 652.879174ms
Aug 26 23:14:48.516: INFO: Pod "alpine-nnp-false-2157710d-c28b-412f-b71a-36fb20a8f809": Phase="Pending", Reason="", readiness=false. Elapsed: 2.874352509s
Aug 26 23:14:50.600: INFO: Pod "alpine-nnp-false-2157710d-c28b-412f-b71a-36fb20a8f809": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.957543439s
Aug 26 23:14:50.600: INFO: Pod "alpine-nnp-false-2157710d-c28b-412f-b71a-36fb20a8f809" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:14:50.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2957" for this suite.

• [SLOW TEST:6.256 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when creating containers with AllowPrivilegeEscalation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":1846,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:14:50.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 23:14:51.968: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 23:14:53.982: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080491, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080491, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080492, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080491, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 23:14:57.078: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:14:57.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2759" for this suite.
STEP: Destroying namespace "webhook-2759-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.724 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":122,"skipped":1846,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:14:57.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:15:57.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2401" for this suite.

• [SLOW TEST:60.126 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":1873,"failed":0}
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:15:57.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 26 23:16:05.975: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 26 23:16:05.990: INFO: Pod pod-with-prestop-http-hook still exists
Aug 26 23:16:07.991: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 26 23:16:07.994: INFO: Pod pod-with-prestop-http-hook still exists
Aug 26 23:16:09.991: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 26 23:16:09.995: INFO: Pod pod-with-prestop-http-hook still exists
Aug 26 23:16:11.991: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 26 23:16:11.994: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:16:12.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1369" for this suite.

• [SLOW TEST:14.186 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":1875,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:16:12.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 26 23:16:16.098: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:16:16.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4772" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":1917,"failed":0}
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:16:16.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:16:22.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7835" for this suite.

• [SLOW TEST:6.150 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":1918,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:16:22.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-03d070a0-bdec-499e-9a79-32c50a192bf1
STEP: Creating a pod to test consume secrets
Aug 26 23:16:22.466: INFO: Waiting up to 5m0s for pod "pod-secrets-c4dfb65d-744c-4e3f-94fc-97d597b2c297" in namespace "secrets-8801" to be "success or failure"
Aug 26 23:16:22.469: INFO: Pod "pod-secrets-c4dfb65d-744c-4e3f-94fc-97d597b2c297": Phase="Pending", Reason="", readiness=false. Elapsed: 3.366209ms
Aug 26 23:16:24.494: INFO: Pod "pod-secrets-c4dfb65d-744c-4e3f-94fc-97d597b2c297": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028535214s
Aug 26 23:16:26.498: INFO: Pod "pod-secrets-c4dfb65d-744c-4e3f-94fc-97d597b2c297": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032620074s
Aug 26 23:16:28.505: INFO: Pod "pod-secrets-c4dfb65d-744c-4e3f-94fc-97d597b2c297": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03904779s
STEP: Saw pod success
Aug 26 23:16:28.505: INFO: Pod "pod-secrets-c4dfb65d-744c-4e3f-94fc-97d597b2c297" satisfied condition "success or failure"
Aug 26 23:16:28.508: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-c4dfb65d-744c-4e3f-94fc-97d597b2c297 container secret-volume-test: 
STEP: delete the pod
Aug 26 23:16:28.548: INFO: Waiting for pod pod-secrets-c4dfb65d-744c-4e3f-94fc-97d597b2c297 to disappear
Aug 26 23:16:28.559: INFO: Pod pod-secrets-c4dfb65d-744c-4e3f-94fc-97d597b2c297 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:16:28.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8801" for this suite.

• [SLOW TEST:6.221 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":1942,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:16:28.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-6224
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 26 23:16:28.639: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 26 23:16:56.790: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.237 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6224 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 23:16:56.790: INFO: >>> kubeConfig: /root/.kube/config
I0826 23:16:56.827273       6 log.go:172] (0xc002477ce0) (0xc001f035e0) Create stream
I0826 23:16:56.827312       6 log.go:172] (0xc002477ce0) (0xc001f035e0) Stream added, broadcasting: 1
I0826 23:16:56.829709       6 log.go:172] (0xc002477ce0) Reply frame received for 1
I0826 23:16:56.829750       6 log.go:172] (0xc002477ce0) (0xc000b650e0) Create stream
I0826 23:16:56.829764       6 log.go:172] (0xc002477ce0) (0xc000b650e0) Stream added, broadcasting: 3
I0826 23:16:56.831666       6 log.go:172] (0xc002477ce0) Reply frame received for 3
I0826 23:16:56.831711       6 log.go:172] (0xc002477ce0) (0xc001f03680) Create stream
I0826 23:16:56.831732       6 log.go:172] (0xc002477ce0) (0xc001f03680) Stream added, broadcasting: 5
I0826 23:16:56.832903       6 log.go:172] (0xc002477ce0) Reply frame received for 5
I0826 23:16:57.891399       6 log.go:172] (0xc002477ce0) Data frame received for 5
I0826 23:16:57.891446       6 log.go:172] (0xc001f03680) (5) Data frame handling
I0826 23:16:57.891472       6 log.go:172] (0xc002477ce0) Data frame received for 3
I0826 23:16:57.891490       6 log.go:172] (0xc000b650e0) (3) Data frame handling
I0826 23:16:57.891504       6 log.go:172] (0xc000b650e0) (3) Data frame sent
I0826 23:16:57.891515       6 log.go:172] (0xc002477ce0) Data frame received for 3
I0826 23:16:57.891530       6 log.go:172] (0xc000b650e0) (3) Data frame handling
I0826 23:16:57.893499       6 log.go:172] (0xc002477ce0) Data frame received for 1
I0826 23:16:57.893533       6 log.go:172] (0xc001f035e0) (1) Data frame handling
I0826 23:16:57.893545       6 log.go:172] (0xc001f035e0) (1) Data frame sent
I0826 23:16:57.893563       6 log.go:172] (0xc002477ce0) (0xc001f035e0) Stream removed, broadcasting: 1
I0826 23:16:57.893604       6 log.go:172] (0xc002477ce0) Go away received
I0826 23:16:57.893692       6 log.go:172] (0xc002477ce0) (0xc001f035e0) Stream removed, broadcasting: 1
I0826 23:16:57.893721       6 log.go:172] (0xc002477ce0) (0xc000b650e0) Stream removed, broadcasting: 3
I0826 23:16:57.893736       6 log.go:172] (0xc002477ce0) (0xc001f03680) Stream removed, broadcasting: 5
Aug 26 23:16:57.893: INFO: Found all expected endpoints: [netserver-0]
Aug 26 23:16:57.897: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.71 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6224 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 23:16:57.897: INFO: >>> kubeConfig: /root/.kube/config
I0826 23:16:57.923498       6 log.go:172] (0xc002eee160) (0xc0010bf400) Create stream
I0826 23:16:57.923528       6 log.go:172] (0xc002eee160) (0xc0010bf400) Stream added, broadcasting: 1
I0826 23:16:57.925303       6 log.go:172] (0xc002eee160) Reply frame received for 1
I0826 23:16:57.925343       6 log.go:172] (0xc002eee160) (0xc002c68000) Create stream
I0826 23:16:57.925357       6 log.go:172] (0xc002eee160) (0xc002c68000) Stream added, broadcasting: 3
I0826 23:16:57.926315       6 log.go:172] (0xc002eee160) Reply frame received for 3
I0826 23:16:57.926369       6 log.go:172] (0xc002eee160) (0xc0010bf9a0) Create stream
I0826 23:16:57.926390       6 log.go:172] (0xc002eee160) (0xc0010bf9a0) Stream added, broadcasting: 5
I0826 23:16:57.927473       6 log.go:172] (0xc002eee160) Reply frame received for 5
I0826 23:16:58.987047       6 log.go:172] (0xc002eee160) Data frame received for 3
I0826 23:16:58.987069       6 log.go:172] (0xc002c68000) (3) Data frame handling
I0826 23:16:58.987082       6 log.go:172] (0xc002c68000) (3) Data frame sent
I0826 23:16:58.987268       6 log.go:172] (0xc002eee160) Data frame received for 5
I0826 23:16:58.987314       6 log.go:172] (0xc0010bf9a0) (5) Data frame handling
I0826 23:16:58.987405       6 log.go:172] (0xc002eee160) Data frame received for 3
I0826 23:16:58.987435       6 log.go:172] (0xc002c68000) (3) Data frame handling
I0826 23:16:58.989289       6 log.go:172] (0xc002eee160) Data frame received for 1
I0826 23:16:58.989332       6 log.go:172] (0xc0010bf400) (1) Data frame handling
I0826 23:16:58.989381       6 log.go:172] (0xc0010bf400) (1) Data frame sent
I0826 23:16:58.989402       6 log.go:172] (0xc002eee160) (0xc0010bf400) Stream removed, broadcasting: 1
I0826 23:16:58.989469       6 log.go:172] (0xc002eee160) Go away received
I0826 23:16:58.989524       6 log.go:172] (0xc002eee160) (0xc0010bf400) Stream removed, broadcasting: 1
I0826 23:16:58.989553       6 log.go:172] (0xc002eee160) (0xc002c68000) Stream removed, broadcasting: 3
I0826 23:16:58.989568       6 log.go:172] (0xc002eee160) (0xc0010bf9a0) Stream removed, broadcasting: 5
Aug 26 23:16:58.989: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:16:58.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6224" for this suite.

• [SLOW TEST:30.431 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":1951,"failed":0}
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:16:58.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 26 23:16:59.096: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 26 23:16:59.105: INFO: Waiting for terminating namespaces to be deleted...
Aug 26 23:16:59.108: INFO: Logging pods the kubelet thinks are on node jerma-worker before test
Aug 26 23:16:59.113: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 26 23:16:59.113: INFO: 	Container app ready: true, restart count 0
Aug 26 23:16:59.113: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 23:16:59.113: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 26 23:16:59.113: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 23:16:59.113: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 23:16:59.113: INFO: netserver-0 from pod-network-test-6224 started at 2020-08-26 23:16:28 +0000 UTC (1 container status recorded)
Aug 26 23:16:59.113: INFO: 	Container webserver ready: true, restart count 0
Aug 26 23:16:59.113: INFO: test-container-pod from pod-network-test-6224 started at 2020-08-26 23:16:50 +0000 UTC (1 container status recorded)
Aug 26 23:16:59.113: INFO: 	Container webserver ready: true, restart count 0
Aug 26 23:16:59.113: INFO: host-test-container-pod from pod-network-test-6224 started at 2020-08-26 23:16:50 +0000 UTC (1 container status recorded)
Aug 26 23:16:59.113: INFO: 	Container agnhost ready: true, restart count 0
Aug 26 23:16:59.113: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 26 23:16:59.119: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 23:16:59.119: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 23:16:59.119: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 26 23:16:59.119: INFO: 	Container app ready: true, restart count 0
Aug 26 23:16:59.119: INFO: busybox-scheduling-39ebf8e4-add4-4474-8d99-78f69b2c7283 from kubelet-test-7835 started at 2020-08-26 23:16:16 +0000 UTC (1 container status recorded)
Aug 26 23:16:59.119: INFO: 	Container busybox-scheduling-39ebf8e4-add4-4474-8d99-78f69b2c7283 ready: false, restart count 0
Aug 26 23:16:59.119: INFO: netserver-1 from pod-network-test-6224 started at 2020-08-26 23:16:28 +0000 UTC (1 container status recorded)
Aug 26 23:16:59.119: INFO: 	Container webserver ready: true, restart count 0
Aug 26 23:16:59.119: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 23:16:59.119: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 26 23:16:59.119: INFO: test-recreate-deployment-5f94c574ff-k4dkm from deployment-5601 started at 2020-08-23 04:50:56 +0000 UTC (1 container status recorded)
Aug 26 23:16:59.119: INFO: 	Container httpd ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to find a node that can run it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-4caad91e-f7e2-4581-b00f-bb8a5a392eb7 90
STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1, expecting it to be scheduled
STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2 on the node where pod1 resides, expecting it to be scheduled
STEP: Trying to create a third pod (pod3) with hostPort 54321 and hostIP 127.0.0.2 but the UDP protocol, on the node where pod2 resides
STEP: removing the label kubernetes.io/e2e-4caad91e-f7e2-4581-b00f-bb8a5a392eb7 off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-4caad91e-f7e2-4581-b00f-bb8a5a392eb7
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:17:19.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8791" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:20.546 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":129,"skipped":1961,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:17:19.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create a job from an image, then delete the job [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Aug 26 23:17:19.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-914 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Aug 26 23:17:23.106: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0826 23:17:23.039477    2659 log.go:172] (0xc000a64580) (0xc00071dae0) Create stream\nI0826 23:17:23.039559    2659 log.go:172] (0xc000a64580) (0xc00071dae0) Stream added, broadcasting: 1\nI0826 23:17:23.043050    2659 log.go:172] (0xc000a64580) Reply frame received for 1\nI0826 23:17:23.043090    2659 log.go:172] (0xc000a64580) (0xc0006d60a0) Create stream\nI0826 23:17:23.043105    2659 log.go:172] (0xc000a64580) (0xc0006d60a0) Stream added, broadcasting: 3\nI0826 23:17:23.043943    2659 log.go:172] (0xc000a64580) Reply frame received for 3\nI0826 23:17:23.043973    2659 log.go:172] (0xc000a64580) (0xc00071db80) Create stream\nI0826 23:17:23.043986    2659 log.go:172] (0xc000a64580) (0xc00071db80) Stream added, broadcasting: 5\nI0826 23:17:23.044897    2659 log.go:172] (0xc000a64580) Reply frame received for 5\nI0826 23:17:23.044954    2659 log.go:172] (0xc000a64580) (0xc00071dc20) Create stream\nI0826 23:17:23.044973    2659 log.go:172] (0xc000a64580) (0xc00071dc20) Stream added, broadcasting: 7\nI0826 23:17:23.045849    2659 log.go:172] (0xc000a64580) Reply frame received for 7\nI0826 23:17:23.046008    2659 log.go:172] (0xc0006d60a0) (3) Writing data frame\nI0826 23:17:23.046130    2659 log.go:172] (0xc0006d60a0) (3) Writing data frame\nI0826 23:17:23.046885    2659 log.go:172] (0xc000a64580) Data frame received for 5\nI0826 23:17:23.046896    2659 log.go:172] (0xc00071db80) (5) Data frame handling\nI0826 23:17:23.046905    2659 log.go:172] (0xc00071db80) (5) Data frame sent\nI0826 23:17:23.047446    2659 log.go:172] (0xc000a64580) Data frame received for 5\nI0826 23:17:23.047457    2659 log.go:172] (0xc00071db80) (5) Data frame handling\nI0826 23:17:23.047466    2659 log.go:172] (0xc00071db80) (5) Data frame sent\nI0826 23:17:23.075576    2659 log.go:172] (0xc000a64580) Data frame received for 5\nI0826 23:17:23.075612    2659 log.go:172] (0xc00071db80) (5) Data frame handling\nI0826 23:17:23.075634    2659 log.go:172] (0xc000a64580) Data frame received for 7\nI0826 23:17:23.075644    2659 log.go:172] (0xc00071dc20) (7) Data frame handling\nI0826 23:17:23.076003    2659 log.go:172] (0xc000a64580) Data frame received for 1\nI0826 23:17:23.076028    2659 log.go:172] (0xc00071dae0) (1) Data frame handling\nI0826 23:17:23.076058    2659 log.go:172] (0xc00071dae0) (1) Data frame sent\nI0826 23:17:23.076157    2659 log.go:172] (0xc000a64580) (0xc00071dae0) Stream removed, broadcasting: 1\nI0826 23:17:23.076321    2659 log.go:172] (0xc000a64580) (0xc0006d60a0) Stream removed, broadcasting: 3\nI0826 23:17:23.076377    2659 log.go:172] (0xc000a64580) Go away received\nI0826 23:17:23.076611    2659 log.go:172] (0xc000a64580) (0xc00071dae0) Stream removed, broadcasting: 1\nI0826 23:17:23.076632    2659 log.go:172] (0xc000a64580) (0xc0006d60a0) Stream removed, broadcasting: 3\nI0826 23:17:23.076642    2659 log.go:172] (0xc000a64580) (0xc00071db80) Stream removed, broadcasting: 5\nI0826 23:17:23.076651    2659 log.go:172] (0xc000a64580) (0xc00071dc20) Stream removed, broadcasting: 7\n"
Aug 26 23:17:23.106: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:17:25.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-914" for this suite.

• [SLOW TEST:5.628 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1843
    should create a job from an image, then delete the job [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Deprecated] [Conformance]","total":278,"completed":130,"skipped":1962,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:17:25.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-5f02258d-a2d8-4728-87a3-da74761021e4
STEP: Creating a pod to test consume configMaps
Aug 26 23:17:25.510: INFO: Waiting up to 5m0s for pod "pod-configmaps-8089d596-359f-4925-84b4-cf1e37c6c5b5" in namespace "configmap-2534" to be "success or failure"
Aug 26 23:17:25.519: INFO: Pod "pod-configmaps-8089d596-359f-4925-84b4-cf1e37c6c5b5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.476945ms
Aug 26 23:17:27.522: INFO: Pod "pod-configmaps-8089d596-359f-4925-84b4-cf1e37c6c5b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01255837s
Aug 26 23:17:29.633: INFO: Pod "pod-configmaps-8089d596-359f-4925-84b4-cf1e37c6c5b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122841531s
Aug 26 23:17:32.147: INFO: Pod "pod-configmaps-8089d596-359f-4925-84b4-cf1e37c6c5b5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.637248226s
Aug 26 23:17:34.272: INFO: Pod "pod-configmaps-8089d596-359f-4925-84b4-cf1e37c6c5b5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.762435962s
Aug 26 23:17:36.386: INFO: Pod "pod-configmaps-8089d596-359f-4925-84b4-cf1e37c6c5b5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.875996677s
Aug 26 23:17:38.389: INFO: Pod "pod-configmaps-8089d596-359f-4925-84b4-cf1e37c6c5b5": Phase="Running", Reason="", readiness=true. Elapsed: 12.879546393s
Aug 26 23:17:40.393: INFO: Pod "pod-configmaps-8089d596-359f-4925-84b4-cf1e37c6c5b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.883677987s
STEP: Saw pod success
Aug 26 23:17:40.394: INFO: Pod "pod-configmaps-8089d596-359f-4925-84b4-cf1e37c6c5b5" satisfied condition "success or failure"
Aug 26 23:17:40.396: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-8089d596-359f-4925-84b4-cf1e37c6c5b5 container configmap-volume-test: 
STEP: delete the pod
Aug 26 23:17:40.485: INFO: Waiting for pod pod-configmaps-8089d596-359f-4925-84b4-cf1e37c6c5b5 to disappear
Aug 26 23:17:40.517: INFO: Pod pod-configmaps-8089d596-359f-4925-84b4-cf1e37c6c5b5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:17:40.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2534" for this suite.

• [SLOW TEST:15.352 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":1968,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:17:40.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 26 23:17:40.676: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:17:40.698: INFO: Number of nodes with available pods: 0
Aug 26 23:17:40.699: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:17:41.706: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:17:41.823: INFO: Number of nodes with available pods: 0
Aug 26 23:17:41.823: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:17:42.704: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:17:42.707: INFO: Number of nodes with available pods: 0
Aug 26 23:17:42.707: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:17:44.106: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:17:45.058: INFO: Number of nodes with available pods: 0
Aug 26 23:17:45.058: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:17:45.999: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:17:46.095: INFO: Number of nodes with available pods: 0
Aug 26 23:17:46.095: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:17:46.814: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:17:46.999: INFO: Number of nodes with available pods: 1
Aug 26 23:17:46.999: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:17:48.550: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:17:49.069: INFO: Number of nodes with available pods: 2
Aug 26 23:17:49.069: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 26 23:17:50.293: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:17:50.297: INFO: Number of nodes with available pods: 1
Aug 26 23:17:50.297: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 23:17:51.302: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:17:51.306: INFO: Number of nodes with available pods: 1
Aug 26 23:17:51.306: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 23:17:52.578: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:17:52.950: INFO: Number of nodes with available pods: 1
Aug 26 23:17:52.950: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 23:17:53.302: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:17:53.305: INFO: Number of nodes with available pods: 1
Aug 26 23:17:53.305: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 23:17:54.479: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:17:54.539: INFO: Number of nodes with available pods: 1
Aug 26 23:17:54.539: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 23:17:55.302: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:17:55.305: INFO: Number of nodes with available pods: 1
Aug 26 23:17:55.305: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 23:17:56.393: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:17:56.396: INFO: Number of nodes with available pods: 1
Aug 26 23:17:56.396: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 23:17:57.302: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:17:57.305: INFO: Number of nodes with available pods: 1
Aug 26 23:17:57.305: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 23:17:58.339: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:17:58.342: INFO: Number of nodes with available pods: 1
Aug 26 23:17:58.342: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 23:17:59.302: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:17:59.305: INFO: Number of nodes with available pods: 1
Aug 26 23:17:59.305: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 23:18:00.302: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:18:00.305: INFO: Number of nodes with available pods: 1
Aug 26 23:18:00.305: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 23:18:01.301: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:18:01.304: INFO: Number of nodes with available pods: 1
Aug 26 23:18:01.304: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 23:18:02.375: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:18:02.379: INFO: Number of nodes with available pods: 1
Aug 26 23:18:02.379: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 23:18:03.489: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:18:03.502: INFO: Number of nodes with available pods: 1
Aug 26 23:18:03.502: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 23:18:04.470: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:18:04.473: INFO: Number of nodes with available pods: 1
Aug 26 23:18:04.473: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 23:18:05.301: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:18:05.304: INFO: Number of nodes with available pods: 1
Aug 26 23:18:05.304: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 23:18:06.393: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:18:06.396: INFO: Number of nodes with available pods: 2
Aug 26 23:18:06.396: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5394, will wait for the garbage collector to delete the pods
Aug 26 23:18:06.457: INFO: Deleting DaemonSet.extensions daemon-set took: 6.524432ms
Aug 26 23:18:06.857: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.280828ms
Aug 26 23:18:11.660: INFO: Number of nodes with available pods: 0
Aug 26 23:18:11.660: INFO: Number of running nodes: 0, number of available pods: 0
Aug 26 23:18:11.671: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5394/daemonsets","resourceVersion":"4037310"},"items":null}

Aug 26 23:18:11.673: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5394/pods","resourceVersion":"4037310"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:18:11.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5394" for this suite.

• [SLOW TEST:31.164 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":132,"skipped":1982,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:18:11.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run default
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490
[It] should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 26 23:18:11.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6704'
Aug 26 23:18:11.851: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 26 23:18:11.851: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1496
Aug 26 23:18:11.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-6704'
Aug 26 23:18:12.018: INFO: stderr: ""
Aug 26 23:18:12.018: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:18:12.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6704" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":133,"skipped":2013,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:18:12.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 26 23:18:17.249: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:18:17.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7625" for this suite.

• [SLOW TEST:5.353 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2024,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:18:17.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:18:17.521: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:18:18.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3233" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":135,"skipped":2032,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:18:18.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:18:18.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 26 23:18:21.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-378 create -f -'
Aug 26 23:18:25.106: INFO: stderr: ""
Aug 26 23:18:25.106: INFO: stdout: "e2e-test-crd-publish-openapi-9129-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 26 23:18:25.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-378 delete e2e-test-crd-publish-openapi-9129-crds test-cr'
Aug 26 23:18:25.197: INFO: stderr: ""
Aug 26 23:18:25.197: INFO: stdout: "e2e-test-crd-publish-openapi-9129-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Aug 26 23:18:25.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-378 apply -f -'
Aug 26 23:18:25.494: INFO: stderr: ""
Aug 26 23:18:25.494: INFO: stdout: "e2e-test-crd-publish-openapi-9129-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 26 23:18:25.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-378 delete e2e-test-crd-publish-openapi-9129-crds test-cr'
Aug 26 23:18:25.588: INFO: stderr: ""
Aug 26 23:18:25.588: INFO: stdout: "e2e-test-crd-publish-openapi-9129-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 26 23:18:25.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9129-crds'
Aug 26 23:18:25.844: INFO: stderr: ""
Aug 26 23:18:25.844: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9129-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:18:27.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-378" for this suite.

• [SLOW TEST:9.592 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":136,"skipped":2050,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:18:27.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 23:18:27.818: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a48c52d-6692-47f4-8f67-f409d66dec75" in namespace "projected-8139" to be "success or failure"
Aug 26 23:18:27.826: INFO: Pod "downwardapi-volume-0a48c52d-6692-47f4-8f67-f409d66dec75": Phase="Pending", Reason="", readiness=false. Elapsed: 8.156061ms
Aug 26 23:18:29.973: INFO: Pod "downwardapi-volume-0a48c52d-6692-47f4-8f67-f409d66dec75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154651553s
Aug 26 23:18:31.977: INFO: Pod "downwardapi-volume-0a48c52d-6692-47f4-8f67-f409d66dec75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.158509495s
STEP: Saw pod success
Aug 26 23:18:31.977: INFO: Pod "downwardapi-volume-0a48c52d-6692-47f4-8f67-f409d66dec75" satisfied condition "success or failure"
Aug 26 23:18:31.980: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0a48c52d-6692-47f4-8f67-f409d66dec75 container client-container: 
STEP: delete the pod
Aug 26 23:18:32.040: INFO: Waiting for pod downwardapi-volume-0a48c52d-6692-47f4-8f67-f409d66dec75 to disappear
Aug 26 23:18:32.065: INFO: Pod downwardapi-volume-0a48c52d-6692-47f4-8f67-f409d66dec75 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:18:32.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8139" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2073,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:18:32.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-8512
[It] Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-8512
STEP: Creating statefulset with conflicting port in namespace statefulset-8512
STEP: Waiting until pod test-pod starts running in namespace statefulset-8512
STEP: Waiting until stateful pod ss-0 has been recreated and deleted at least once in namespace statefulset-8512
Aug 26 23:18:36.343: INFO: Observed stateful pod in namespace: statefulset-8512, name: ss-0, uid: 1fb25707-5d7f-43b5-9e5c-b6c48b8bd27b, status phase: Pending. Waiting for statefulset controller to delete.
Aug 26 23:18:36.609: INFO: Observed stateful pod in namespace: statefulset-8512, name: ss-0, uid: 1fb25707-5d7f-43b5-9e5c-b6c48b8bd27b, status phase: Failed. Waiting for statefulset controller to delete.
Aug 26 23:18:36.639: INFO: Observed stateful pod in namespace: statefulset-8512, name: ss-0, uid: 1fb25707-5d7f-43b5-9e5c-b6c48b8bd27b, status phase: Failed. Waiting for statefulset controller to delete.
Aug 26 23:18:36.688: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8512
STEP: Removing pod with conflicting port in namespace statefulset-8512
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-8512 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 26 23:18:42.937: INFO: Deleting all statefulset in ns statefulset-8512
Aug 26 23:18:42.952: INFO: Scaling statefulset ss to 0
Aug 26 23:18:52.968: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 23:18:52.971: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:18:52.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8512" for this suite.

• [SLOW TEST:20.934 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":138,"skipped":2095,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:18:53.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-ad0c471b-5cca-444c-bf54-97895cc7b4d5
STEP: Creating configMap with name cm-test-opt-upd-7b47a19c-0c60-410d-ad43-b526da430129
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-ad0c471b-5cca-444c-bf54-97895cc7b4d5
STEP: Updating configmap cm-test-opt-upd-7b47a19c-0c60-410d-ad43-b526da430129
STEP: Creating configMap with name cm-test-opt-create-7fac8374-196d-43dd-82f2-9d44f6a998b4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:19:01.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2240" for this suite.

• [SLOW TEST:8.173 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2121,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:19:01.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:19:33.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5999" for this suite.
STEP: Destroying namespace "nsdeletetest-4311" for this suite.
Aug 26 23:19:33.503: INFO: Namespace nsdeletetest-4311 was already deleted
STEP: Destroying namespace "nsdeletetest-3127" for this suite.

• [SLOW TEST:32.326 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":140,"skipped":2133,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:19:33.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:19:33.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-4645
I0826 23:19:33.582217       6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4645, replica count: 1
I0826 23:19:34.632553       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 23:19:35.632955       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 23:19:36.633192       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 26 23:19:36.763: INFO: Created: latency-svc-rlbnf
Aug 26 23:19:36.781: INFO: Got endpoints: latency-svc-rlbnf [48.020033ms]
Aug 26 23:19:36.890: INFO: Created: latency-svc-qpg7m
Aug 26 23:19:36.954: INFO: Got endpoints: latency-svc-qpg7m [172.994631ms]
Aug 26 23:19:36.955: INFO: Created: latency-svc-bg45m
Aug 26 23:19:36.979: INFO: Got endpoints: latency-svc-bg45m [197.187772ms]
Aug 26 23:19:37.100: INFO: Created: latency-svc-26rxs
Aug 26 23:19:37.105: INFO: Got endpoints: latency-svc-26rxs [323.433336ms]
Aug 26 23:19:37.148: INFO: Created: latency-svc-lb297
Aug 26 23:19:37.164: INFO: Got endpoints: latency-svc-lb297 [382.753918ms]
Aug 26 23:19:37.183: INFO: Created: latency-svc-vjzk8
Aug 26 23:19:37.291: INFO: Got endpoints: latency-svc-vjzk8 [509.682251ms]
Aug 26 23:19:37.294: INFO: Created: latency-svc-mdhb5
Aug 26 23:19:37.351: INFO: Got endpoints: latency-svc-mdhb5 [569.793959ms]
Aug 26 23:19:37.471: INFO: Created: latency-svc-2jtn6
Aug 26 23:19:37.530: INFO: Got endpoints: latency-svc-2jtn6 [749.105491ms]
Aug 26 23:19:37.531: INFO: Created: latency-svc-2mdl7
Aug 26 23:19:37.704: INFO: Got endpoints: latency-svc-2mdl7 [923.09523ms]
Aug 26 23:19:37.707: INFO: Created: latency-svc-2qwlj
Aug 26 23:19:37.735: INFO: Got endpoints: latency-svc-2qwlj [953.933966ms]
Aug 26 23:19:37.791: INFO: Created: latency-svc-9wvmv
Aug 26 23:19:37.860: INFO: Got endpoints: latency-svc-9wvmv [1.079029905s]
Aug 26 23:19:37.910: INFO: Created: latency-svc-g7csl
Aug 26 23:19:37.927: INFO: Got endpoints: latency-svc-g7csl [1.145773567s]
Aug 26 23:19:38.006: INFO: Created: latency-svc-pd7jq
Aug 26 23:19:38.017: INFO: Got endpoints: latency-svc-pd7jq [1.235813204s]
Aug 26 23:19:38.054: INFO: Created: latency-svc-x87jb
Aug 26 23:19:38.065: INFO: Got endpoints: latency-svc-x87jb [1.284205267s]
Aug 26 23:19:38.135: INFO: Created: latency-svc-r5jjg
Aug 26 23:19:38.139: INFO: Got endpoints: latency-svc-r5jjg [1.357754341s]
Aug 26 23:19:38.210: INFO: Created: latency-svc-w6jn8
Aug 26 23:19:38.222: INFO: Got endpoints: latency-svc-w6jn8 [1.440625365s]
Aug 26 23:19:38.274: INFO: Created: latency-svc-2lqm5
Aug 26 23:19:38.307: INFO: Created: latency-svc-2rh4w
Aug 26 23:19:38.307: INFO: Got endpoints: latency-svc-2lqm5 [1.352784913s]
Aug 26 23:19:38.319: INFO: Got endpoints: latency-svc-2rh4w [1.339900322s]
Aug 26 23:19:38.361: INFO: Created: latency-svc-mxx4k
Aug 26 23:19:38.372: INFO: Got endpoints: latency-svc-mxx4k [1.266881853s]
Aug 26 23:19:38.417: INFO: Created: latency-svc-p8tp8
Aug 26 23:19:38.427: INFO: Got endpoints: latency-svc-p8tp8 [1.262519775s]
Aug 26 23:19:38.469: INFO: Created: latency-svc-hfsc6
Aug 26 23:19:38.474: INFO: Got endpoints: latency-svc-hfsc6 [1.182491469s]
Aug 26 23:19:38.505: INFO: Created: latency-svc-blfxn
Aug 26 23:19:38.517: INFO: Got endpoints: latency-svc-blfxn [1.165645643s]
Aug 26 23:19:38.583: INFO: Created: latency-svc-xgrxn
Aug 26 23:19:38.625: INFO: Got endpoints: latency-svc-xgrxn [1.094147113s]
Aug 26 23:19:38.735: INFO: Created: latency-svc-j7q6b
Aug 26 23:19:38.764: INFO: Got endpoints: latency-svc-j7q6b [1.059135678s]
Aug 26 23:19:38.873: INFO: Created: latency-svc-s4lsz
Aug 26 23:19:38.877: INFO: Got endpoints: latency-svc-s4lsz [1.141359732s]
Aug 26 23:19:38.938: INFO: Created: latency-svc-nfv4t
Aug 26 23:19:38.967: INFO: Got endpoints: latency-svc-nfv4t [1.106473588s]
Aug 26 23:19:39.071: INFO: Created: latency-svc-62cvb
Aug 26 23:19:39.081: INFO: Got endpoints: latency-svc-62cvb [1.153968173s]
Aug 26 23:19:39.149: INFO: Created: latency-svc-ktjj7
Aug 26 23:19:39.167: INFO: Got endpoints: latency-svc-ktjj7 [1.14930897s]
Aug 26 23:19:39.245: INFO: Created: latency-svc-5r848
Aug 26 23:19:39.363: INFO: Got endpoints: latency-svc-5r848 [1.297694101s]
Aug 26 23:19:39.642: INFO: Created: latency-svc-vpx7t
Aug 26 23:19:39.950: INFO: Got endpoints: latency-svc-vpx7t [1.811227928s]
Aug 26 23:19:40.156: INFO: Created: latency-svc-8nzh4
Aug 26 23:19:40.181: INFO: Got endpoints: latency-svc-8nzh4 [1.958767236s]
Aug 26 23:19:40.453: INFO: Created: latency-svc-gwh4p
Aug 26 23:19:40.675: INFO: Got endpoints: latency-svc-gwh4p [2.367885257s]
Aug 26 23:19:40.830: INFO: Created: latency-svc-ljtzj
Aug 26 23:19:40.843: INFO: Got endpoints: latency-svc-ljtzj [2.524245642s]
Aug 26 23:19:40.986: INFO: Created: latency-svc-522qh
Aug 26 23:19:41.030: INFO: Got endpoints: latency-svc-522qh [2.658386653s]
Aug 26 23:19:41.136: INFO: Created: latency-svc-kd2dq
Aug 26 23:19:41.139: INFO: Got endpoints: latency-svc-kd2dq [2.712124942s]
Aug 26 23:19:41.582: INFO: Created: latency-svc-mkn7m
Aug 26 23:19:41.587: INFO: Got endpoints: latency-svc-mkn7m [3.11330805s]
Aug 26 23:19:41.744: INFO: Created: latency-svc-65tlg
Aug 26 23:19:41.754: INFO: Got endpoints: latency-svc-65tlg [3.23742183s]
Aug 26 23:19:41.828: INFO: Created: latency-svc-qskj7
Aug 26 23:19:41.884: INFO: Got endpoints: latency-svc-qskj7 [3.259421872s]
Aug 26 23:19:41.901: INFO: Created: latency-svc-4kjzz
Aug 26 23:19:41.931: INFO: Got endpoints: latency-svc-4kjzz [3.166938854s]
Aug 26 23:19:41.972: INFO: Created: latency-svc-hrdsl
Aug 26 23:19:42.051: INFO: Got endpoints: latency-svc-hrdsl [3.174445686s]
Aug 26 23:19:42.081: INFO: Created: latency-svc-7sr9p
Aug 26 23:19:42.099: INFO: Got endpoints: latency-svc-7sr9p [3.13213719s]
Aug 26 23:19:42.147: INFO: Created: latency-svc-df894
Aug 26 23:19:42.190: INFO: Got endpoints: latency-svc-df894 [3.109001266s]
Aug 26 23:19:42.231: INFO: Created: latency-svc-8zp4l
Aug 26 23:19:42.260: INFO: Got endpoints: latency-svc-8zp4l [3.093654153s]
Aug 26 23:19:42.363: INFO: Created: latency-svc-qjj8c
Aug 26 23:19:42.367: INFO: Got endpoints: latency-svc-qjj8c [3.003702901s]
Aug 26 23:19:42.437: INFO: Created: latency-svc-5zzfm
Aug 26 23:19:42.559: INFO: Got endpoints: latency-svc-5zzfm [2.608466445s]
Aug 26 23:19:42.825: INFO: Created: latency-svc-lgxlz
Aug 26 23:19:42.872: INFO: Got endpoints: latency-svc-lgxlz [2.691121793s]
Aug 26 23:19:42.906: INFO: Created: latency-svc-j2zk9
Aug 26 23:19:42.998: INFO: Got endpoints: latency-svc-j2zk9 [2.323156743s]
Aug 26 23:19:43.086: INFO: Created: latency-svc-6xkv4
Aug 26 23:19:43.094: INFO: Got endpoints: latency-svc-6xkv4 [2.251570076s]
Aug 26 23:19:43.182: INFO: Created: latency-svc-692pn
Aug 26 23:19:43.190: INFO: Got endpoints: latency-svc-692pn [2.159688777s]
Aug 26 23:19:43.230: INFO: Created: latency-svc-vz4bt
Aug 26 23:19:43.238: INFO: Got endpoints: latency-svc-vz4bt [2.099612974s]
Aug 26 23:19:43.321: INFO: Created: latency-svc-x24vp
Aug 26 23:19:43.334: INFO: Got endpoints: latency-svc-x24vp [1.747201834s]
Aug 26 23:19:43.429: INFO: Created: latency-svc-pqwc6
Aug 26 23:19:43.507: INFO: Got endpoints: latency-svc-pqwc6 [1.752430616s]
Aug 26 23:19:43.507: INFO: Created: latency-svc-qxcx5
Aug 26 23:19:43.511: INFO: Got endpoints: latency-svc-qxcx5 [1.626943756s]
Aug 26 23:19:43.585: INFO: Created: latency-svc-ckd6p
Aug 26 23:19:43.600: INFO: Got endpoints: latency-svc-ckd6p [1.668907218s]
Aug 26 23:19:43.658: INFO: Created: latency-svc-4gjqk
Aug 26 23:19:43.711: INFO: Got endpoints: latency-svc-4gjqk [1.659214975s]
Aug 26 23:19:43.807: INFO: Created: latency-svc-skpt8
Aug 26 23:19:43.866: INFO: Got endpoints: latency-svc-skpt8 [1.767508734s]
Aug 26 23:19:43.909: INFO: Created: latency-svc-8zr2v
Aug 26 23:19:43.917: INFO: Got endpoints: latency-svc-8zr2v [1.727352773s]
Aug 26 23:19:44.009: INFO: Created: latency-svc-n7ztf
Aug 26 23:19:44.020: INFO: Got endpoints: latency-svc-n7ztf [1.759354836s]
Aug 26 23:19:44.059: INFO: Created: latency-svc-v6pzf
Aug 26 23:19:44.068: INFO: Got endpoints: latency-svc-v6pzf [1.700671364s]
Aug 26 23:19:44.161: INFO: Created: latency-svc-lcgjq
Aug 26 23:19:44.194: INFO: Got endpoints: latency-svc-lcgjq [1.63545944s]
Aug 26 23:19:44.324: INFO: Created: latency-svc-mzpvb
Aug 26 23:19:44.366: INFO: Got endpoints: latency-svc-mzpvb [1.493260411s]
Aug 26 23:19:44.415: INFO: Created: latency-svc-r687j
Aug 26 23:19:44.483: INFO: Got endpoints: latency-svc-r687j [1.484762055s]
Aug 26 23:19:44.581: INFO: Created: latency-svc-br768
Aug 26 23:19:44.652: INFO: Got endpoints: latency-svc-br768 [1.55722487s]
Aug 26 23:19:44.738: INFO: Created: latency-svc-q9ssq
Aug 26 23:19:44.830: INFO: Got endpoints: latency-svc-q9ssq [1.640134685s]
Aug 26 23:19:44.909: INFO: Created: latency-svc-48fkq
Aug 26 23:19:44.915: INFO: Got endpoints: latency-svc-48fkq [1.676605149s]
Aug 26 23:19:44.986: INFO: Created: latency-svc-wjwzw
Aug 26 23:19:45.022: INFO: Got endpoints: latency-svc-wjwzw [1.688103892s]
Aug 26 23:19:45.074: INFO: Created: latency-svc-hlj9t
Aug 26 23:19:45.177: INFO: Got endpoints: latency-svc-hlj9t [1.670746292s]
Aug 26 23:19:45.194: INFO: Created: latency-svc-g6m99
Aug 26 23:19:45.204: INFO: Got endpoints: latency-svc-g6m99 [1.692825133s]
Aug 26 23:19:45.263: INFO: Created: latency-svc-q696f
Aug 26 23:19:45.357: INFO: Got endpoints: latency-svc-q696f [1.7570705s]
Aug 26 23:19:45.370: INFO: Created: latency-svc-ljbjs
Aug 26 23:19:45.383: INFO: Got endpoints: latency-svc-ljbjs [1.672571906s]
Aug 26 23:19:45.501: INFO: Created: latency-svc-h5sgz
Aug 26 23:19:45.513: INFO: Got endpoints: latency-svc-h5sgz [1.646988991s]
Aug 26 23:19:45.639: INFO: Created: latency-svc-tg27v
Aug 26 23:19:45.642: INFO: Got endpoints: latency-svc-tg27v [1.724803415s]
Aug 26 23:19:45.718: INFO: Created: latency-svc-mjblx
Aug 26 23:19:45.725: INFO: Got endpoints: latency-svc-mjblx [1.705515236s]
Aug 26 23:19:45.836: INFO: Created: latency-svc-g6jhn
Aug 26 23:19:45.839: INFO: Got endpoints: latency-svc-g6jhn [1.771833047s]
Aug 26 23:19:45.882: INFO: Created: latency-svc-p2svx
Aug 26 23:19:45.962: INFO: Got endpoints: latency-svc-p2svx [1.767692312s]
Aug 26 23:19:46.007: INFO: Created: latency-svc-tmvm9
Aug 26 23:19:46.032: INFO: Got endpoints: latency-svc-tmvm9 [1.666321507s]
Aug 26 23:19:46.215: INFO: Created: latency-svc-4n48r
Aug 26 23:19:46.230: INFO: Got endpoints: latency-svc-4n48r [1.746816204s]
Aug 26 23:19:46.417: INFO: Created: latency-svc-6dx2v
Aug 26 23:19:46.419: INFO: Got endpoints: latency-svc-6dx2v [1.767693718s]
Aug 26 23:19:46.512: INFO: Created: latency-svc-s99kt
Aug 26 23:19:46.650: INFO: Got endpoints: latency-svc-s99kt [1.819715467s]
Aug 26 23:19:46.668: INFO: Created: latency-svc-pkm46
Aug 26 23:19:46.710: INFO: Got endpoints: latency-svc-pkm46 [1.795007941s]
Aug 26 23:19:46.843: INFO: Created: latency-svc-2nkwx
Aug 26 23:19:46.879: INFO: Got endpoints: latency-svc-2nkwx [1.856149947s]
Aug 26 23:19:47.046: INFO: Created: latency-svc-5bw6h
Aug 26 23:19:47.048: INFO: Got endpoints: latency-svc-5bw6h [1.870936003s]
Aug 26 23:19:47.885: INFO: Created: latency-svc-2rz2m
Aug 26 23:19:48.340: INFO: Got endpoints: latency-svc-2rz2m [3.135714465s]
Aug 26 23:19:49.015: INFO: Created: latency-svc-mqgmj
Aug 26 23:19:49.359: INFO: Got endpoints: latency-svc-mqgmj [4.001860428s]
Aug 26 23:19:49.591: INFO: Created: latency-svc-8zzfw
Aug 26 23:19:49.849: INFO: Got endpoints: latency-svc-8zzfw [4.465321391s]
Aug 26 23:19:50.089: INFO: Created: latency-svc-f27bw
Aug 26 23:19:50.148: INFO: Got endpoints: latency-svc-f27bw [4.634863532s]
Aug 26 23:19:50.237: INFO: Created: latency-svc-q9zt4
Aug 26 23:19:50.280: INFO: Got endpoints: latency-svc-q9zt4 [4.637842825s]
Aug 26 23:19:50.435: INFO: Created: latency-svc-ph5zq
Aug 26 23:19:50.490: INFO: Got endpoints: latency-svc-ph5zq [4.764497113s]
Aug 26 23:19:50.691: INFO: Created: latency-svc-f9rss
Aug 26 23:19:51.130: INFO: Got endpoints: latency-svc-f9rss [5.290599616s]
Aug 26 23:19:51.669: INFO: Created: latency-svc-nx4sc
Aug 26 23:19:51.672: INFO: Got endpoints: latency-svc-nx4sc [5.7100532s]
Aug 26 23:19:52.126: INFO: Created: latency-svc-nnc2w
Aug 26 23:19:52.150: INFO: Got endpoints: latency-svc-nnc2w [6.117656838s]
Aug 26 23:19:52.699: INFO: Created: latency-svc-c68ts
Aug 26 23:19:53.021: INFO: Got endpoints: latency-svc-c68ts [6.791429769s]
Aug 26 23:19:53.024: INFO: Created: latency-svc-5222v
Aug 26 23:19:53.123: INFO: Got endpoints: latency-svc-5222v [6.703817039s]
Aug 26 23:19:53.126: INFO: Created: latency-svc-kfl25
Aug 26 23:19:53.176: INFO: Got endpoints: latency-svc-kfl25 [6.525706052s]
Aug 26 23:19:53.555: INFO: Created: latency-svc-6gn5p
Aug 26 23:19:53.609: INFO: Got endpoints: latency-svc-6gn5p [6.898939179s]
Aug 26 23:19:53.704: INFO: Created: latency-svc-nmwx8
Aug 26 23:19:53.721: INFO: Got endpoints: latency-svc-nmwx8 [6.84206144s]
Aug 26 23:19:53.790: INFO: Created: latency-svc-pzwv4
Aug 26 23:19:53.872: INFO: Got endpoints: latency-svc-pzwv4 [6.823719991s]
Aug 26 23:19:53.874: INFO: Created: latency-svc-vv7z6
Aug 26 23:19:53.904: INFO: Got endpoints: latency-svc-vv7z6 [5.56429024s]
Aug 26 23:19:53.959: INFO: Created: latency-svc-fddvz
Aug 26 23:19:54.070: INFO: Got endpoints: latency-svc-fddvz [4.710857058s]
Aug 26 23:19:54.097: INFO: Created: latency-svc-8jbst
Aug 26 23:19:54.147: INFO: Got endpoints: latency-svc-8jbst [4.297861914s]
Aug 26 23:19:54.247: INFO: Created: latency-svc-5rm26
Aug 26 23:19:54.261: INFO: Got endpoints: latency-svc-5rm26 [4.11250361s]
Aug 26 23:19:54.295: INFO: Created: latency-svc-xht74
Aug 26 23:19:54.303: INFO: Got endpoints: latency-svc-xht74 [4.022904257s]
Aug 26 23:19:54.325: INFO: Created: latency-svc-qnhq9
Aug 26 23:19:54.375: INFO: Got endpoints: latency-svc-qnhq9 [3.885503405s]
Aug 26 23:19:54.421: INFO: Created: latency-svc-w56q5
Aug 26 23:19:54.436: INFO: Got endpoints: latency-svc-w56q5 [3.306345744s]
Aug 26 23:19:54.458: INFO: Created: latency-svc-d9rcv
Aug 26 23:19:54.472: INFO: Got endpoints: latency-svc-d9rcv [2.799349862s]
Aug 26 23:19:54.525: INFO: Created: latency-svc-rg95c
Aug 26 23:19:54.544: INFO: Got endpoints: latency-svc-rg95c [2.394282179s]
Aug 26 23:19:54.601: INFO: Created: latency-svc-6xc4t
Aug 26 23:19:54.771: INFO: Got endpoints: latency-svc-6xc4t [1.749332157s]
Aug 26 23:19:54.794: INFO: Created: latency-svc-89w2d
Aug 26 23:19:54.820: INFO: Got endpoints: latency-svc-89w2d [1.696636288s]
Aug 26 23:19:54.859: INFO: Created: latency-svc-pcnzl
Aug 26 23:19:54.868: INFO: Got endpoints: latency-svc-pcnzl [1.691837609s]
Aug 26 23:19:54.956: INFO: Created: latency-svc-rbb8m
Aug 26 23:19:54.982: INFO: Got endpoints: latency-svc-rbb8m [1.373191793s]
Aug 26 23:19:55.023: INFO: Created: latency-svc-nplmt
Aug 26 23:19:55.111: INFO: Got endpoints: latency-svc-nplmt [1.390546427s]
Aug 26 23:19:55.123: INFO: Created: latency-svc-xl9rw
Aug 26 23:19:55.273: INFO: Got endpoints: latency-svc-xl9rw [1.401155491s]
Aug 26 23:19:55.682: INFO: Created: latency-svc-p78r2
Aug 26 23:19:55.764: INFO: Got endpoints: latency-svc-p78r2 [1.859897083s]
Aug 26 23:19:55.978: INFO: Created: latency-svc-jdxvc
Aug 26 23:19:56.268: INFO: Got endpoints: latency-svc-jdxvc [2.198219873s]
Aug 26 23:19:56.270: INFO: Created: latency-svc-6wxbk
Aug 26 23:19:56.314: INFO: Got endpoints: latency-svc-6wxbk [2.167668265s]
Aug 26 23:19:56.457: INFO: Created: latency-svc-jsr8q
Aug 26 23:19:56.499: INFO: Got endpoints: latency-svc-jsr8q [2.238180034s]
Aug 26 23:19:56.639: INFO: Created: latency-svc-h7qv7
Aug 26 23:19:56.702: INFO: Got endpoints: latency-svc-h7qv7 [2.398953939s]
Aug 26 23:19:56.702: INFO: Created: latency-svc-ngg66
Aug 26 23:19:56.780: INFO: Got endpoints: latency-svc-ngg66 [2.40447071s]
Aug 26 23:19:56.841: INFO: Created: latency-svc-n8tcb
Aug 26 23:19:56.854: INFO: Got endpoints: latency-svc-n8tcb [2.417469781s]
Aug 26 23:19:56.920: INFO: Created: latency-svc-2fb6l
Aug 26 23:19:56.932: INFO: Got endpoints: latency-svc-2fb6l [2.460264607s]
Aug 26 23:19:57.002: INFO: Created: latency-svc-lp7hd
Aug 26 23:19:57.310: INFO: Got endpoints: latency-svc-lp7hd [2.76574053s]
Aug 26 23:19:57.313: INFO: Created: latency-svc-hr6hs
Aug 26 23:19:57.387: INFO: Got endpoints: latency-svc-hr6hs [2.616387286s]
Aug 26 23:19:57.477: INFO: Created: latency-svc-n98l7
Aug 26 23:19:57.524: INFO: Got endpoints: latency-svc-n98l7 [2.704384747s]
Aug 26 23:19:57.525: INFO: Created: latency-svc-drpmt
Aug 26 23:19:57.545: INFO: Got endpoints: latency-svc-drpmt [2.677205658s]
Aug 26 23:19:57.657: INFO: Created: latency-svc-9k7k6
Aug 26 23:19:57.705: INFO: Got endpoints: latency-svc-9k7k6 [2.722202153s]
Aug 26 23:19:57.705: INFO: Created: latency-svc-d4jjj
Aug 26 23:19:57.742: INFO: Got endpoints: latency-svc-d4jjj [2.630835008s]
Aug 26 23:19:58.004: INFO: Created: latency-svc-9cxzx
Aug 26 23:19:58.096: INFO: Got endpoints: latency-svc-9cxzx [2.822323795s]
Aug 26 23:19:58.202: INFO: Created: latency-svc-f45bb
Aug 26 23:19:58.387: INFO: Got endpoints: latency-svc-f45bb [2.622737404s]
Aug 26 23:19:58.395: INFO: Created: latency-svc-246sx
Aug 26 23:19:58.444: INFO: Got endpoints: latency-svc-246sx [2.175927578s]
Aug 26 23:19:58.931: INFO: Created: latency-svc-g2gx6
Aug 26 23:19:58.941: INFO: Got endpoints: latency-svc-g2gx6 [2.626579877s]
Aug 26 23:19:59.248: INFO: Created: latency-svc-k8vt5
Aug 26 23:19:59.296: INFO: Got endpoints: latency-svc-k8vt5 [2.797183542s]
Aug 26 23:19:59.434: INFO: Created: latency-svc-vngf9
Aug 26 23:19:59.903: INFO: Got endpoints: latency-svc-vngf9 [3.200526664s]
Aug 26 23:19:59.904: INFO: Created: latency-svc-sx5mc
Aug 26 23:19:59.925: INFO: Got endpoints: latency-svc-sx5mc [3.145294261s]
Aug 26 23:20:00.375: INFO: Created: latency-svc-726sn
Aug 26 23:20:00.422: INFO: Got endpoints: latency-svc-726sn [3.568035932s]
Aug 26 23:20:00.637: INFO: Created: latency-svc-rwfz4
Aug 26 23:20:00.662: INFO: Got endpoints: latency-svc-rwfz4 [3.72967687s]
Aug 26 23:20:01.827: INFO: Created: latency-svc-4897v
Aug 26 23:20:01.896: INFO: Got endpoints: latency-svc-4897v [4.585920027s]
Aug 26 23:20:02.118: INFO: Created: latency-svc-x766r
Aug 26 23:20:02.154: INFO: Got endpoints: latency-svc-x766r [4.766810588s]
Aug 26 23:20:02.459: INFO: Created: latency-svc-fhm44
Aug 26 23:20:02.543: INFO: Got endpoints: latency-svc-fhm44 [5.0185397s]
Aug 26 23:20:02.664: INFO: Created: latency-svc-p6jg2
Aug 26 23:20:02.866: INFO: Got endpoints: latency-svc-p6jg2 [5.320750366s]
Aug 26 23:20:02.869: INFO: Created: latency-svc-kc8v4
Aug 26 23:20:02.957: INFO: Got endpoints: latency-svc-kc8v4 [5.252519225s]
Aug 26 23:20:03.114: INFO: Created: latency-svc-n9dw9
Aug 26 23:20:03.154: INFO: Got endpoints: latency-svc-n9dw9 [5.411976625s]
Aug 26 23:20:03.279: INFO: Created: latency-svc-k629q
Aug 26 23:20:03.282: INFO: Got endpoints: latency-svc-k629q [5.186297376s]
Aug 26 23:20:03.373: INFO: Created: latency-svc-svwl2
Aug 26 23:20:03.448: INFO: Got endpoints: latency-svc-svwl2 [5.060685271s]
Aug 26 23:20:03.481: INFO: Created: latency-svc-bm9wk
Aug 26 23:20:03.494: INFO: Got endpoints: latency-svc-bm9wk [5.049860947s]
Aug 26 23:20:03.542: INFO: Created: latency-svc-2tn6g
Aug 26 23:20:03.630: INFO: Created: latency-svc-vxbws
Aug 26 23:20:03.631: INFO: Got endpoints: latency-svc-2tn6g [4.690172085s]
Aug 26 23:20:03.657: INFO: Got endpoints: latency-svc-vxbws [4.360113581s]
Aug 26 23:20:03.697: INFO: Created: latency-svc-bp59v
Aug 26 23:20:03.770: INFO: Got endpoints: latency-svc-bp59v [3.867248362s]
Aug 26 23:20:03.787: INFO: Created: latency-svc-pjzlt
Aug 26 23:20:03.810: INFO: Got endpoints: latency-svc-pjzlt [3.885131589s]
Aug 26 23:20:03.859: INFO: Created: latency-svc-85xqd
Aug 26 23:20:03.866: INFO: Got endpoints: latency-svc-85xqd [3.44407063s]
Aug 26 23:20:03.950: INFO: Created: latency-svc-w5t9f
Aug 26 23:20:03.963: INFO: Got endpoints: latency-svc-w5t9f [3.300981157s]
Aug 26 23:20:04.006: INFO: Created: latency-svc-6ldqt
Aug 26 23:20:04.030: INFO: Got endpoints: latency-svc-6ldqt [2.13425475s]
Aug 26 23:20:04.118: INFO: Created: latency-svc-g4mrj
Aug 26 23:20:04.121: INFO: Got endpoints: latency-svc-g4mrj [1.966737175s]
Aug 26 23:20:04.165: INFO: Created: latency-svc-7s8dc
Aug 26 23:20:04.197: INFO: Got endpoints: latency-svc-7s8dc [1.654376838s]
Aug 26 23:20:04.291: INFO: Created: latency-svc-qhbv9
Aug 26 23:20:04.318: INFO: Got endpoints: latency-svc-qhbv9 [1.452102485s]
Aug 26 23:20:04.358: INFO: Created: latency-svc-j78nr
Aug 26 23:20:04.429: INFO: Got endpoints: latency-svc-j78nr [1.47203926s]
Aug 26 23:20:04.500: INFO: Created: latency-svc-6jxjp
Aug 26 23:20:04.528: INFO: Got endpoints: latency-svc-6jxjp [1.37333919s]
Aug 26 23:20:04.627: INFO: Created: latency-svc-7j5jt
Aug 26 23:20:04.642: INFO: Got endpoints: latency-svc-7j5jt [1.359487743s]
Aug 26 23:20:04.918: INFO: Created: latency-svc-wqrrr
Aug 26 23:20:04.942: INFO: Got endpoints: latency-svc-wqrrr [1.494368682s]
Aug 26 23:20:05.130: INFO: Created: latency-svc-l72ln
Aug 26 23:20:05.142: INFO: Got endpoints: latency-svc-l72ln [1.648264946s]
Aug 26 23:20:05.220: INFO: Created: latency-svc-bnxj4
Aug 26 23:20:05.286: INFO: Got endpoints: latency-svc-bnxj4 [1.65477403s]
Aug 26 23:20:05.317: INFO: Created: latency-svc-r2pbn
Aug 26 23:20:05.326: INFO: Got endpoints: latency-svc-r2pbn [1.669266439s]
Aug 26 23:20:05.359: INFO: Created: latency-svc-gzc7l
Aug 26 23:20:05.362: INFO: Got endpoints: latency-svc-gzc7l [1.591743093s]
Aug 26 23:20:05.423: INFO: Created: latency-svc-k9279
Aug 26 23:20:05.425: INFO: Got endpoints: latency-svc-k9279 [1.614934915s]
Aug 26 23:20:05.466: INFO: Created: latency-svc-4cbq2
Aug 26 23:20:05.495: INFO: Got endpoints: latency-svc-4cbq2 [1.628597747s]
Aug 26 23:20:05.521: INFO: Created: latency-svc-zkh58
Aug 26 23:20:05.597: INFO: Got endpoints: latency-svc-zkh58 [1.633576837s]
Aug 26 23:20:05.598: INFO: Created: latency-svc-xmgdr
Aug 26 23:20:05.609: INFO: Got endpoints: latency-svc-xmgdr [1.578944228s]
Aug 26 23:20:05.640: INFO: Created: latency-svc-l7r4b
Aug 26 23:20:05.658: INFO: Got endpoints: latency-svc-l7r4b [1.536555505s]
Aug 26 23:20:05.682: INFO: Created: latency-svc-9mb48
Aug 26 23:20:05.734: INFO: Got endpoints: latency-svc-9mb48 [1.536669522s]
Aug 26 23:20:05.754: INFO: Created: latency-svc-7l9d9
Aug 26 23:20:05.772: INFO: Got endpoints: latency-svc-7l9d9 [1.453733002s]
Aug 26 23:20:05.796: INFO: Created: latency-svc-wgkhv
Aug 26 23:20:05.814: INFO: Got endpoints: latency-svc-wgkhv [1.384744271s]
Aug 26 23:20:05.933: INFO: Created: latency-svc-65n76
Aug 26 23:20:05.977: INFO: Got endpoints: latency-svc-65n76 [1.448832858s]
Aug 26 23:20:06.094: INFO: Created: latency-svc-xdd2x
Aug 26 23:20:06.132: INFO: Got endpoints: latency-svc-xdd2x [1.490479457s]
Aug 26 23:20:06.133: INFO: Created: latency-svc-hbppb
Aug 26 23:20:06.156: INFO: Got endpoints: latency-svc-hbppb [1.21411201s]
Aug 26 23:20:06.193: INFO: Created: latency-svc-955ll
Aug 26 23:20:06.267: INFO: Got endpoints: latency-svc-955ll [1.125318658s]
Aug 26 23:20:06.301: INFO: Created: latency-svc-v9pnl
Aug 26 23:20:06.355: INFO: Got endpoints: latency-svc-v9pnl [1.068585234s]
Aug 26 23:20:06.423: INFO: Created: latency-svc-pkq8x
Aug 26 23:20:06.468: INFO: Got endpoints: latency-svc-pkq8x [1.142479209s]
Aug 26 23:20:06.517: INFO: Created: latency-svc-z9w47
Aug 26 23:20:06.573: INFO: Got endpoints: latency-svc-z9w47 [1.210726198s]
Aug 26 23:20:06.589: INFO: Created: latency-svc-ggnsk
Aug 26 23:20:06.607: INFO: Got endpoints: latency-svc-ggnsk [1.18173984s]
Aug 26 23:20:06.631: INFO: Created: latency-svc-sfcmq
Aug 26 23:20:06.655: INFO: Got endpoints: latency-svc-sfcmq [1.160588272s]
Aug 26 23:20:06.721: INFO: Created: latency-svc-s96gk
Aug 26 23:20:06.740: INFO: Got endpoints: latency-svc-s96gk [1.14312806s]
Aug 26 23:20:06.782: INFO: Created: latency-svc-k6th2
Aug 26 23:20:06.800: INFO: Got endpoints: latency-svc-k6th2 [1.190605567s]
Aug 26 23:20:06.854: INFO: Created: latency-svc-4cvsj
Aug 26 23:20:06.889: INFO: Created: latency-svc-wl4ps
Aug 26 23:20:06.889: INFO: Got endpoints: latency-svc-4cvsj [1.231608684s]
Aug 26 23:20:06.902: INFO: Got endpoints: latency-svc-wl4ps [1.168072458s]
Aug 26 23:20:06.925: INFO: Created: latency-svc-l8vrd
Aug 26 23:20:06.938: INFO: Got endpoints: latency-svc-l8vrd [1.166362867s]
Aug 26 23:20:06.999: INFO: Created: latency-svc-gdqdv
Aug 26 23:20:07.002: INFO: Got endpoints: latency-svc-gdqdv [1.187841992s]
Aug 26 23:20:07.051: INFO: Created: latency-svc-8n2sb
Aug 26 23:20:07.168: INFO: Got endpoints: latency-svc-8n2sb [1.191073317s]
Aug 26 23:20:07.169: INFO: Created: latency-svc-tkkvc
Aug 26 23:20:07.192: INFO: Got endpoints: latency-svc-tkkvc [1.059721444s]
Aug 26 23:20:07.221: INFO: Created: latency-svc-xflfx
Aug 26 23:20:07.233: INFO: Got endpoints: latency-svc-xflfx [1.076747513s]
Aug 26 23:20:07.394: INFO: Created: latency-svc-w4bpl
Aug 26 23:20:07.428: INFO: Got endpoints: latency-svc-w4bpl [1.160308086s]
Aug 26 23:20:07.489: INFO: Created: latency-svc-cs69w
Aug 26 23:20:07.537: INFO: Got endpoints: latency-svc-cs69w [1.182057881s]
Aug 26 23:20:07.555: INFO: Created: latency-svc-fwcs6
Aug 26 23:20:07.585: INFO: Got endpoints: latency-svc-fwcs6 [1.116755501s]
Aug 26 23:20:07.615: INFO: Created: latency-svc-5q9zw
Aug 26 23:20:07.632: INFO: Got endpoints: latency-svc-5q9zw [1.059644858s]
Aug 26 23:20:07.693: INFO: Created: latency-svc-kb8ds
Aug 26 23:20:07.699: INFO: Got endpoints: latency-svc-kb8ds [1.091572508s]
Aug 26 23:20:07.772: INFO: Created: latency-svc-k8g7h
Aug 26 23:20:07.849: INFO: Got endpoints: latency-svc-k8g7h [1.193032275s]
Aug 26 23:20:07.880: INFO: Created: latency-svc-qcqp2
Aug 26 23:20:08.454: INFO: Got endpoints: latency-svc-qcqp2 [1.713998317s]
Aug 26 23:20:08.495: INFO: Created: latency-svc-rg7vp
Aug 26 23:20:08.710: INFO: Got endpoints: latency-svc-rg7vp [1.91067032s]
Aug 26 23:20:08.774: INFO: Created: latency-svc-spfmv
Aug 26 23:20:08.926: INFO: Got endpoints: latency-svc-spfmv [2.036554366s]
Aug 26 23:20:09.019: INFO: Created: latency-svc-mqkph
Aug 26 23:20:09.076: INFO: Got endpoints: latency-svc-mqkph [2.17350217s]
Aug 26 23:20:09.122: INFO: Created: latency-svc-7wtc8
Aug 26 23:20:09.131: INFO: Got endpoints: latency-svc-7wtc8 [2.192977427s]
Aug 26 23:20:09.207: INFO: Created: latency-svc-tq9hd
Aug 26 23:20:09.242: INFO: Got endpoints: latency-svc-tq9hd [2.240134282s]
Aug 26 23:20:09.243: INFO: Created: latency-svc-x4q4b
Aug 26 23:20:09.252: INFO: Got endpoints: latency-svc-x4q4b [2.084133583s]
Aug 26 23:20:09.252: INFO: Latencies: [172.994631ms 197.187772ms 323.433336ms 382.753918ms 509.682251ms 569.793959ms 749.105491ms 923.09523ms 953.933966ms 1.059135678s 1.059644858s 1.059721444s 1.068585234s 1.076747513s 1.079029905s 1.091572508s 1.094147113s 1.106473588s 1.116755501s 1.125318658s 1.141359732s 1.142479209s 1.14312806s 1.145773567s 1.14930897s 1.153968173s 1.160308086s 1.160588272s 1.165645643s 1.166362867s 1.168072458s 1.18173984s 1.182057881s 1.182491469s 1.187841992s 1.190605567s 1.191073317s 1.193032275s 1.210726198s 1.21411201s 1.231608684s 1.235813204s 1.262519775s 1.266881853s 1.284205267s 1.297694101s 1.339900322s 1.352784913s 1.357754341s 1.359487743s 1.373191793s 1.37333919s 1.384744271s 1.390546427s 1.401155491s 1.440625365s 1.448832858s 1.452102485s 1.453733002s 1.47203926s 1.484762055s 1.490479457s 1.493260411s 1.494368682s 1.536555505s 1.536669522s 1.55722487s 1.578944228s 1.591743093s 1.614934915s 1.626943756s 1.628597747s 1.633576837s 1.63545944s 1.640134685s 1.646988991s 1.648264946s 1.654376838s 1.65477403s 1.659214975s 1.666321507s 1.668907218s 1.669266439s 1.670746292s 1.672571906s 1.676605149s 1.688103892s 1.691837609s 1.692825133s 1.696636288s 1.700671364s 1.705515236s 1.713998317s 1.724803415s 1.727352773s 1.746816204s 1.747201834s 1.749332157s 1.752430616s 1.7570705s 1.759354836s 1.767508734s 1.767692312s 1.767693718s 1.771833047s 1.795007941s 1.811227928s 1.819715467s 1.856149947s 1.859897083s 1.870936003s 1.91067032s 1.958767236s 1.966737175s 2.036554366s 2.084133583s 2.099612974s 2.13425475s 2.159688777s 2.167668265s 2.17350217s 2.175927578s 2.192977427s 2.198219873s 2.238180034s 2.240134282s 2.251570076s 2.323156743s 2.367885257s 2.394282179s 2.398953939s 2.40447071s 2.417469781s 2.460264607s 2.524245642s 2.608466445s 2.616387286s 2.622737404s 2.626579877s 2.630835008s 2.658386653s 2.677205658s 2.691121793s 2.704384747s 2.712124942s 2.722202153s 2.76574053s 2.797183542s 2.799349862s 2.822323795s 3.003702901s 3.093654153s 3.109001266s 3.11330805s 3.13213719s 3.135714465s 3.145294261s 3.166938854s 3.174445686s 3.200526664s 3.23742183s 3.259421872s 3.300981157s 3.306345744s 3.44407063s 3.568035932s 3.72967687s 3.867248362s 3.885131589s 3.885503405s 4.001860428s 4.022904257s 4.11250361s 4.297861914s 4.360113581s 4.465321391s 4.585920027s 4.634863532s 4.637842825s 4.690172085s 4.710857058s 4.764497113s 4.766810588s 5.0185397s 5.049860947s 5.060685271s 5.186297376s 5.252519225s 5.290599616s 5.320750366s 5.411976625s 5.56429024s 5.7100532s 6.117656838s 6.525706052s 6.703817039s 6.791429769s 6.823719991s 6.84206144s 6.898939179s]
Aug 26 23:20:09.252: INFO: 50 %ile: 1.759354836s
Aug 26 23:20:09.252: INFO: 90 %ile: 4.710857058s
Aug 26 23:20:09.252: INFO: 99 %ile: 6.84206144s
Aug 26 23:20:09.252: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:20:09.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-4645" for this suite.

• [SLOW TEST:35.757 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":278,"completed":141,"skipped":2148,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:20:09.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:20:09.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:20:18.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2618" for this suite.

• [SLOW TEST:9.387 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2216,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:20:18.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-da2d28db-7328-415b-b9d0-0c0b407a725c
STEP: Creating a pod to test consume secrets
Aug 26 23:20:18.778: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-59096232-141d-44e9-acf4-93a2aa85e0f3" in namespace "projected-4266" to be "success or failure"
Aug 26 23:20:18.783: INFO: Pod "pod-projected-secrets-59096232-141d-44e9-acf4-93a2aa85e0f3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.38323ms
Aug 26 23:20:21.442: INFO: Pod "pod-projected-secrets-59096232-141d-44e9-acf4-93a2aa85e0f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.664493864s
Aug 26 23:20:23.493: INFO: Pod "pod-projected-secrets-59096232-141d-44e9-acf4-93a2aa85e0f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.715127869s
Aug 26 23:20:25.621: INFO: Pod "pod-projected-secrets-59096232-141d-44e9-acf4-93a2aa85e0f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.843518882s
STEP: Saw pod success
Aug 26 23:20:25.621: INFO: Pod "pod-projected-secrets-59096232-141d-44e9-acf4-93a2aa85e0f3" satisfied condition "success or failure"
Aug 26 23:20:26.186: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-59096232-141d-44e9-acf4-93a2aa85e0f3 container projected-secret-volume-test: 
STEP: delete the pod
Aug 26 23:20:26.843: INFO: Waiting for pod pod-projected-secrets-59096232-141d-44e9-acf4-93a2aa85e0f3 to disappear
Aug 26 23:20:26.894: INFO: Pod pod-projected-secrets-59096232-141d-44e9-acf4-93a2aa85e0f3 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:20:26.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4266" for this suite.

• [SLOW TEST:8.884 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2268,"failed":0}
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:20:27.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 26 23:20:28.897: INFO: Waiting up to 5m0s for pod "pod-6fb0c42e-51dc-471d-ad8c-c28c4518bc3c" in namespace "emptydir-9454" to be "success or failure"
Aug 26 23:20:28.901: INFO: Pod "pod-6fb0c42e-51dc-471d-ad8c-c28c4518bc3c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.483256ms
Aug 26 23:20:31.383: INFO: Pod "pod-6fb0c42e-51dc-471d-ad8c-c28c4518bc3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.48593753s
Aug 26 23:20:33.489: INFO: Pod "pod-6fb0c42e-51dc-471d-ad8c-c28c4518bc3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.592431487s
Aug 26 23:20:35.525: INFO: Pod "pod-6fb0c42e-51dc-471d-ad8c-c28c4518bc3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.628379687s
STEP: Saw pod success
Aug 26 23:20:35.525: INFO: Pod "pod-6fb0c42e-51dc-471d-ad8c-c28c4518bc3c" satisfied condition "success or failure"
Aug 26 23:20:35.528: INFO: Trying to get logs from node jerma-worker pod pod-6fb0c42e-51dc-471d-ad8c-c28c4518bc3c container test-container: 
STEP: delete the pod
Aug 26 23:20:36.001: INFO: Waiting for pod pod-6fb0c42e-51dc-471d-ad8c-c28c4518bc3c to disappear
Aug 26 23:20:36.056: INFO: Pod pod-6fb0c42e-51dc-471d-ad8c-c28c4518bc3c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:20:36.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9454" for this suite.

• [SLOW TEST:8.601 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2268,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:20:36.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 26 23:20:36.930: INFO: Waiting up to 5m0s for pod "pod-2ffa71de-a205-491f-913f-2befb1dde1a7" in namespace "emptydir-6300" to be "success or failure"
Aug 26 23:20:37.146: INFO: Pod "pod-2ffa71de-a205-491f-913f-2befb1dde1a7": Phase="Pending", Reason="", readiness=false. Elapsed: 215.473442ms
Aug 26 23:20:39.464: INFO: Pod "pod-2ffa71de-a205-491f-913f-2befb1dde1a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.533649386s
Aug 26 23:20:42.311: INFO: Pod "pod-2ffa71de-a205-491f-913f-2befb1dde1a7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.380529172s
Aug 26 23:20:44.352: INFO: Pod "pod-2ffa71de-a205-491f-913f-2befb1dde1a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.421559778s
STEP: Saw pod success
Aug 26 23:20:44.352: INFO: Pod "pod-2ffa71de-a205-491f-913f-2befb1dde1a7" satisfied condition "success or failure"
Aug 26 23:20:44.355: INFO: Trying to get logs from node jerma-worker pod pod-2ffa71de-a205-491f-913f-2befb1dde1a7 container test-container: 
STEP: delete the pod
Aug 26 23:20:45.091: INFO: Waiting for pod pod-2ffa71de-a205-491f-913f-2befb1dde1a7 to disappear
Aug 26 23:20:45.113: INFO: Pod pod-2ffa71de-a205-491f-913f-2befb1dde1a7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:20:45.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6300" for this suite.

• [SLOW TEST:9.122 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2274,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:20:45.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-c188fdbe-3545-4906-b695-4e1bfb1bee22
STEP: Creating a pod to test consume secrets
Aug 26 23:20:45.466: INFO: Waiting up to 5m0s for pod "pod-secrets-302cc6bf-b439-4258-99f1-d2348bfbd8bf" in namespace "secrets-2065" to be "success or failure"
Aug 26 23:20:45.777: INFO: Pod "pod-secrets-302cc6bf-b439-4258-99f1-d2348bfbd8bf": Phase="Pending", Reason="", readiness=false. Elapsed: 310.777392ms
Aug 26 23:20:48.373: INFO: Pod "pod-secrets-302cc6bf-b439-4258-99f1-d2348bfbd8bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.90610045s
Aug 26 23:20:50.672: INFO: Pod "pod-secrets-302cc6bf-b439-4258-99f1-d2348bfbd8bf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.20550431s
Aug 26 23:20:52.682: INFO: Pod "pod-secrets-302cc6bf-b439-4258-99f1-d2348bfbd8bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.215919869s
STEP: Saw pod success
Aug 26 23:20:52.682: INFO: Pod "pod-secrets-302cc6bf-b439-4258-99f1-d2348bfbd8bf" satisfied condition "success or failure"
Aug 26 23:20:52.685: INFO: Trying to get logs from node jerma-worker pod pod-secrets-302cc6bf-b439-4258-99f1-d2348bfbd8bf container secret-volume-test: 
STEP: delete the pod
Aug 26 23:20:53.082: INFO: Waiting for pod pod-secrets-302cc6bf-b439-4258-99f1-d2348bfbd8bf to disappear
Aug 26 23:20:53.136: INFO: Pod pod-secrets-302cc6bf-b439-4258-99f1-d2348bfbd8bf no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:20:53.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2065" for this suite.

• [SLOW TEST:7.922 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2279,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:20:53.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 26 23:20:57.516: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:20:57.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8452" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2311,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:20:57.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 23:20:58.000: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3600517d-dcf0-4c9c-89df-7f4e9426e6d2" in namespace "downward-api-7729" to be "success or failure"
Aug 26 23:20:58.031: INFO: Pod "downwardapi-volume-3600517d-dcf0-4c9c-89df-7f4e9426e6d2": Phase="Pending", Reason="", readiness=false. Elapsed: 30.720333ms
Aug 26 23:21:00.167: INFO: Pod "downwardapi-volume-3600517d-dcf0-4c9c-89df-7f4e9426e6d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166562367s
Aug 26 23:21:02.190: INFO: Pod "downwardapi-volume-3600517d-dcf0-4c9c-89df-7f4e9426e6d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189879672s
Aug 26 23:21:04.372: INFO: Pod "downwardapi-volume-3600517d-dcf0-4c9c-89df-7f4e9426e6d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.371149712s
STEP: Saw pod success
Aug 26 23:21:04.372: INFO: Pod "downwardapi-volume-3600517d-dcf0-4c9c-89df-7f4e9426e6d2" satisfied condition "success or failure"
Aug 26 23:21:04.447: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-3600517d-dcf0-4c9c-89df-7f4e9426e6d2 container client-container: 
STEP: delete the pod
Aug 26 23:21:04.589: INFO: Waiting for pod downwardapi-volume-3600517d-dcf0-4c9c-89df-7f4e9426e6d2 to disappear
Aug 26 23:21:04.603: INFO: Pod downwardapi-volume-3600517d-dcf0-4c9c-89df-7f4e9426e6d2 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:21:04.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7729" for this suite.

• [SLOW TEST:6.909 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2319,"failed":0}
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:21:04.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Aug 26 23:21:21.250: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-634 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 23:21:21.250: INFO: >>> kubeConfig: /root/.kube/config
I0826 23:21:21.292900       6 log.go:172] (0xc005be02c0) (0xc002c69220) Create stream
I0826 23:21:21.292933       6 log.go:172] (0xc005be02c0) (0xc002c69220) Stream added, broadcasting: 1
I0826 23:21:21.294420       6 log.go:172] (0xc005be02c0) Reply frame received for 1
I0826 23:21:21.294438       6 log.go:172] (0xc005be02c0) (0xc002c69360) Create stream
I0826 23:21:21.294446       6 log.go:172] (0xc005be02c0) (0xc002c69360) Stream added, broadcasting: 3
I0826 23:21:21.297200       6 log.go:172] (0xc005be02c0) Reply frame received for 3
I0826 23:21:21.297228       6 log.go:172] (0xc005be02c0) (0xc0010be140) Create stream
I0826 23:21:21.297241       6 log.go:172] (0xc005be02c0) (0xc0010be140) Stream added, broadcasting: 5
I0826 23:21:21.300011       6 log.go:172] (0xc005be02c0) Reply frame received for 5
I0826 23:21:21.384923       6 log.go:172] (0xc005be02c0) Data frame received for 5
I0826 23:21:21.384948       6 log.go:172] (0xc0010be140) (5) Data frame handling
I0826 23:21:21.384977       6 log.go:172] (0xc005be02c0) Data frame received for 3
I0826 23:21:21.385003       6 log.go:172] (0xc002c69360) (3) Data frame handling
I0826 23:21:21.385022       6 log.go:172] (0xc002c69360) (3) Data frame sent
I0826 23:21:21.385033       6 log.go:172] (0xc005be02c0) Data frame received for 3
I0826 23:21:21.385041       6 log.go:172] (0xc002c69360) (3) Data frame handling
I0826 23:21:21.386018       6 log.go:172] (0xc005be02c0) Data frame received for 1
I0826 23:21:21.386066       6 log.go:172] (0xc002c69220) (1) Data frame handling
I0826 23:21:21.386089       6 log.go:172] (0xc002c69220) (1) Data frame sent
I0826 23:21:21.386103       6 log.go:172] (0xc005be02c0) (0xc002c69220) Stream removed, broadcasting: 1
I0826 23:21:21.386119       6 log.go:172] (0xc005be02c0) Go away received
I0826 23:21:21.386199       6 log.go:172] (0xc005be02c0) (0xc002c69220) Stream removed, broadcasting: 1
I0826 23:21:21.386228       6 log.go:172] (0xc005be02c0) (0xc002c69360) Stream removed, broadcasting: 3
I0826 23:21:21.386246       6 log.go:172] (0xc005be02c0) (0xc0010be140) Stream removed, broadcasting: 5
Aug 26 23:21:21.386: INFO: Exec stderr: ""
Aug 26 23:21:21.386: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-634 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 23:21:21.386: INFO: >>> kubeConfig: /root/.kube/config
I0826 23:21:21.423738       6 log.go:172] (0xc00416c420) (0xc001f028c0) Create stream
I0826 23:21:21.423770       6 log.go:172] (0xc00416c420) (0xc001f028c0) Stream added, broadcasting: 1
I0826 23:21:21.425267       6 log.go:172] (0xc00416c420) Reply frame received for 1
I0826 23:21:21.425308       6 log.go:172] (0xc00416c420) (0xc0010be5a0) Create stream
I0826 23:21:21.425321       6 log.go:172] (0xc00416c420) (0xc0010be5a0) Stream added, broadcasting: 3
I0826 23:21:21.426217       6 log.go:172] (0xc00416c420) Reply frame received for 3
I0826 23:21:21.426253       6 log.go:172] (0xc00416c420) (0xc0003e59a0) Create stream
I0826 23:21:21.426268       6 log.go:172] (0xc00416c420) (0xc0003e59a0) Stream added, broadcasting: 5
I0826 23:21:21.427108       6 log.go:172] (0xc00416c420) Reply frame received for 5
I0826 23:21:21.513077       6 log.go:172] (0xc00416c420) Data frame received for 5
I0826 23:21:21.513105       6 log.go:172] (0xc0003e59a0) (5) Data frame handling
I0826 23:21:21.513121       6 log.go:172] (0xc00416c420) Data frame received for 3
I0826 23:21:21.513133       6 log.go:172] (0xc0010be5a0) (3) Data frame handling
I0826 23:21:21.513146       6 log.go:172] (0xc0010be5a0) (3) Data frame sent
I0826 23:21:21.513157       6 log.go:172] (0xc00416c420) Data frame received for 3
I0826 23:21:21.513164       6 log.go:172] (0xc0010be5a0) (3) Data frame handling
I0826 23:21:21.514036       6 log.go:172] (0xc00416c420) Data frame received for 1
I0826 23:21:21.514069       6 log.go:172] (0xc001f028c0) (1) Data frame handling
I0826 23:21:21.514093       6 log.go:172] (0xc001f028c0) (1) Data frame sent
I0826 23:21:21.514108       6 log.go:172] (0xc00416c420) (0xc001f028c0) Stream removed, broadcasting: 1
I0826 23:21:21.514125       6 log.go:172] (0xc00416c420) Go away received
I0826 23:21:21.514185       6 log.go:172] (0xc00416c420) (0xc001f028c0) Stream removed, broadcasting: 1
I0826 23:21:21.514201       6 log.go:172] (0xc00416c420) (0xc0010be5a0) Stream removed, broadcasting: 3
I0826 23:21:21.514214       6 log.go:172] (0xc00416c420) (0xc0003e59a0) Stream removed, broadcasting: 5
Aug 26 23:21:21.514: INFO: Exec stderr: ""
Aug 26 23:21:21.514: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-634 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 23:21:21.514: INFO: >>> kubeConfig: /root/.kube/config
I0826 23:21:21.615589       6 log.go:172] (0xc003166370) (0xc0010bf040) Create stream
I0826 23:21:21.615613       6 log.go:172] (0xc003166370) (0xc0010bf040) Stream added, broadcasting: 1
I0826 23:21:21.617111       6 log.go:172] (0xc003166370) Reply frame received for 1
I0826 23:21:21.617138       6 log.go:172] (0xc003166370) (0xc0010bf0e0) Create stream
I0826 23:21:21.617148       6 log.go:172] (0xc003166370) (0xc0010bf0e0) Stream added, broadcasting: 3
I0826 23:21:21.617856       6 log.go:172] (0xc003166370) Reply frame received for 3
I0826 23:21:21.617899       6 log.go:172] (0xc003166370) (0xc0010bf180) Create stream
I0826 23:21:21.617923       6 log.go:172] (0xc003166370) (0xc0010bf180) Stream added, broadcasting: 5
I0826 23:21:21.618552       6 log.go:172] (0xc003166370) Reply frame received for 5
I0826 23:21:21.687213       6 log.go:172] (0xc003166370) Data frame received for 3
I0826 23:21:21.687233       6 log.go:172] (0xc0010bf0e0) (3) Data frame handling
I0826 23:21:21.687244       6 log.go:172] (0xc0010bf0e0) (3) Data frame sent
I0826 23:21:21.687251       6 log.go:172] (0xc003166370) Data frame received for 3
I0826 23:21:21.687259       6 log.go:172] (0xc0010bf0e0) (3) Data frame handling
I0826 23:21:21.687406       6 log.go:172] (0xc003166370) Data frame received for 5
I0826 23:21:21.687416       6 log.go:172] (0xc0010bf180) (5) Data frame handling
I0826 23:21:21.688297       6 log.go:172] (0xc003166370) Data frame received for 1
I0826 23:21:21.688315       6 log.go:172] (0xc0010bf040) (1) Data frame handling
I0826 23:21:21.688330       6 log.go:172] (0xc0010bf040) (1) Data frame sent
I0826 23:21:21.688339       6 log.go:172] (0xc003166370) (0xc0010bf040) Stream removed, broadcasting: 1
I0826 23:21:21.688348       6 log.go:172] (0xc003166370) Go away received
I0826 23:21:21.688423       6 log.go:172] (0xc003166370) (0xc0010bf040) Stream removed, broadcasting: 1
I0826 23:21:21.688439       6 log.go:172] (0xc003166370) (0xc0010bf0e0) Stream removed, broadcasting: 3
I0826 23:21:21.688451       6 log.go:172] (0xc003166370) (0xc0010bf180) Stream removed, broadcasting: 5
Aug 26 23:21:21.688: INFO: Exec stderr: ""
Aug 26 23:21:21.688: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-634 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 23:21:21.688: INFO: >>> kubeConfig: /root/.kube/config
I0826 23:21:21.765651       6 log.go:172] (0xc0031669a0) (0xc0010bf9a0) Create stream
I0826 23:21:21.765684       6 log.go:172] (0xc0031669a0) (0xc0010bf9a0) Stream added, broadcasting: 1
I0826 23:21:21.772567       6 log.go:172] (0xc0031669a0) Reply frame received for 1
I0826 23:21:21.772614       6 log.go:172] (0xc0031669a0) (0xc0010bfa40) Create stream
I0826 23:21:21.772630       6 log.go:172] (0xc0031669a0) (0xc0010bfa40) Stream added, broadcasting: 3
I0826 23:21:21.773417       6 log.go:172] (0xc0031669a0) Reply frame received for 3
I0826 23:21:21.773460       6 log.go:172] (0xc0031669a0) (0xc002c694a0) Create stream
I0826 23:21:21.773473       6 log.go:172] (0xc0031669a0) (0xc002c694a0) Stream added, broadcasting: 5
I0826 23:21:21.774124       6 log.go:172] (0xc0031669a0) Reply frame received for 5
I0826 23:21:21.855529       6 log.go:172] (0xc0031669a0) Data frame received for 5
I0826 23:21:21.855550       6 log.go:172] (0xc002c694a0) (5) Data frame handling
I0826 23:21:21.855576       6 log.go:172] (0xc0031669a0) Data frame received for 3
I0826 23:21:21.855586       6 log.go:172] (0xc0010bfa40) (3) Data frame handling
I0826 23:21:21.855595       6 log.go:172] (0xc0010bfa40) (3) Data frame sent
I0826 23:21:21.855601       6 log.go:172] (0xc0031669a0) Data frame received for 3
I0826 23:21:21.855608       6 log.go:172] (0xc0010bfa40) (3) Data frame handling
I0826 23:21:21.856370       6 log.go:172] (0xc0031669a0) Data frame received for 1
I0826 23:21:21.856392       6 log.go:172] (0xc0010bf9a0) (1) Data frame handling
I0826 23:21:21.856403       6 log.go:172] (0xc0010bf9a0) (1) Data frame sent
I0826 23:21:21.856413       6 log.go:172] (0xc0031669a0) (0xc0010bf9a0) Stream removed, broadcasting: 1
I0826 23:21:21.856432       6 log.go:172] (0xc0031669a0) Go away received
I0826 23:21:21.856530       6 log.go:172] (0xc0031669a0) (0xc0010bf9a0) Stream removed, broadcasting: 1
I0826 23:21:21.856542       6 log.go:172] (0xc0031669a0) (0xc0010bfa40) Stream removed, broadcasting: 3
I0826 23:21:21.856552       6 log.go:172] (0xc0031669a0) (0xc002c694a0) Stream removed, broadcasting: 5
Aug 26 23:21:21.856: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Aug 26 23:21:21.856: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-634 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 23:21:21.856: INFO: >>> kubeConfig: /root/.kube/config
I0826 23:21:21.893051       6 log.go:172] (0xc00167e4d0) (0xc0003e5e00) Create stream
I0826 23:21:21.893094       6 log.go:172] (0xc00167e4d0) (0xc0003e5e00) Stream added, broadcasting: 1
I0826 23:21:21.894712       6 log.go:172] (0xc00167e4d0) Reply frame received for 1
I0826 23:21:21.894754       6 log.go:172] (0xc00167e4d0) (0xc001f02960) Create stream
I0826 23:21:21.894770       6 log.go:172] (0xc00167e4d0) (0xc001f02960) Stream added, broadcasting: 3
I0826 23:21:21.895600       6 log.go:172] (0xc00167e4d0) Reply frame received for 3
I0826 23:21:21.895633       6 log.go:172] (0xc00167e4d0) (0xc0010bfd60) Create stream
I0826 23:21:21.895645       6 log.go:172] (0xc00167e4d0) (0xc0010bfd60) Stream added, broadcasting: 5
I0826 23:21:21.896413       6 log.go:172] (0xc00167e4d0) Reply frame received for 5
I0826 23:21:21.965360       6 log.go:172] (0xc00167e4d0) Data frame received for 5
I0826 23:21:21.965391       6 log.go:172] (0xc0010bfd60) (5) Data frame handling
I0826 23:21:21.965409       6 log.go:172] (0xc00167e4d0) Data frame received for 3
I0826 23:21:21.965419       6 log.go:172] (0xc001f02960) (3) Data frame handling
I0826 23:21:21.965430       6 log.go:172] (0xc001f02960) (3) Data frame sent
I0826 23:21:21.965440       6 log.go:172] (0xc00167e4d0) Data frame received for 3
I0826 23:21:21.965448       6 log.go:172] (0xc001f02960) (3) Data frame handling
I0826 23:21:21.966278       6 log.go:172] (0xc00167e4d0) Data frame received for 1
I0826 23:21:21.966293       6 log.go:172] (0xc0003e5e00) (1) Data frame handling
I0826 23:21:21.966315       6 log.go:172] (0xc0003e5e00) (1) Data frame sent
I0826 23:21:21.966333       6 log.go:172] (0xc00167e4d0) (0xc0003e5e00) Stream removed, broadcasting: 1
I0826 23:21:21.966349       6 log.go:172] (0xc00167e4d0) Go away received
I0826 23:21:21.966471       6 log.go:172] (0xc00167e4d0) (0xc0003e5e00) Stream removed, broadcasting: 1
I0826 23:21:21.966489       6 log.go:172] (0xc00167e4d0) (0xc001f02960) Stream removed, broadcasting: 3
I0826 23:21:21.966505       6 log.go:172] (0xc00167e4d0) (0xc0010bfd60) Stream removed, broadcasting: 5
Aug 26 23:21:21.966: INFO: Exec stderr: ""
Aug 26 23:21:21.966: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-634 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 23:21:21.966: INFO: >>> kubeConfig: /root/.kube/config
I0826 23:21:22.013025       6 log.go:172] (0xc00416ca50) (0xc001f02e60) Create stream
I0826 23:21:22.013058       6 log.go:172] (0xc00416ca50) (0xc001f02e60) Stream added, broadcasting: 1
I0826 23:21:22.014455       6 log.go:172] (0xc00416ca50) Reply frame received for 1
I0826 23:21:22.014496       6 log.go:172] (0xc00416ca50) (0xc002c69680) Create stream
I0826 23:21:22.014510       6 log.go:172] (0xc00416ca50) (0xc002c69680) Stream added, broadcasting: 3
I0826 23:21:22.015132       6 log.go:172] (0xc00416ca50) Reply frame received for 3
I0826 23:21:22.015152       6 log.go:172] (0xc00416ca50) (0xc002806dc0) Create stream
I0826 23:21:22.015159       6 log.go:172] (0xc00416ca50) (0xc002806dc0) Stream added, broadcasting: 5
I0826 23:21:22.015750       6 log.go:172] (0xc00416ca50) Reply frame received for 5
I0826 23:21:22.086740       6 log.go:172] (0xc00416ca50) Data frame received for 5
I0826 23:21:22.086769       6 log.go:172] (0xc002806dc0) (5) Data frame handling
I0826 23:21:22.086811       6 log.go:172] (0xc00416ca50) Data frame received for 3
I0826 23:21:22.086848       6 log.go:172] (0xc002c69680) (3) Data frame handling
I0826 23:21:22.086865       6 log.go:172] (0xc002c69680) (3) Data frame sent
I0826 23:21:22.086880       6 log.go:172] (0xc00416ca50) Data frame received for 3
I0826 23:21:22.086887       6 log.go:172] (0xc002c69680) (3) Data frame handling
I0826 23:21:22.087828       6 log.go:172] (0xc00416ca50) Data frame received for 1
I0826 23:21:22.087839       6 log.go:172] (0xc001f02e60) (1) Data frame handling
I0826 23:21:22.087847       6 log.go:172] (0xc001f02e60) (1) Data frame sent
I0826 23:21:22.087859       6 log.go:172] (0xc00416ca50) (0xc001f02e60) Stream removed, broadcasting: 1
I0826 23:21:22.087871       6 log.go:172] (0xc00416ca50) Go away received
I0826 23:21:22.087944       6 log.go:172] (0xc00416ca50) (0xc001f02e60) Stream removed, broadcasting: 1
I0826 23:21:22.087955       6 log.go:172] (0xc00416ca50) (0xc002c69680) Stream removed, broadcasting: 3
I0826 23:21:22.087961       6 log.go:172] (0xc00416ca50) (0xc002806dc0) Stream removed, broadcasting: 5
Aug 26 23:21:22.087: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Aug 26 23:21:22.087: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-634 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 23:21:22.088: INFO: >>> kubeConfig: /root/.kube/config
I0826 23:21:22.152917       6 log.go:172] (0xc001eca630) (0xc002807040) Create stream
I0826 23:21:22.152953       6 log.go:172] (0xc001eca630) (0xc002807040) Stream added, broadcasting: 1
I0826 23:21:22.154532       6 log.go:172] (0xc001eca630) Reply frame received for 1
I0826 23:21:22.154568       6 log.go:172] (0xc001eca630) (0xc002c69720) Create stream
I0826 23:21:22.154577       6 log.go:172] (0xc001eca630) (0xc002c69720) Stream added, broadcasting: 3
I0826 23:21:22.155478       6 log.go:172] (0xc001eca630) Reply frame received for 3
I0826 23:21:22.155519       6 log.go:172] (0xc001eca630) (0xc001f02f00) Create stream
I0826 23:21:22.155531       6 log.go:172] (0xc001eca630) (0xc001f02f00) Stream added, broadcasting: 5
I0826 23:21:22.156357       6 log.go:172] (0xc001eca630) Reply frame received for 5
I0826 23:21:22.238719       6 log.go:172] (0xc001eca630) Data frame received for 3
I0826 23:21:22.238738       6 log.go:172] (0xc002c69720) (3) Data frame handling
I0826 23:21:22.238747       6 log.go:172] (0xc002c69720) (3) Data frame sent
I0826 23:21:22.238756       6 log.go:172] (0xc001eca630) Data frame received for 3
I0826 23:21:22.238761       6 log.go:172] (0xc002c69720) (3) Data frame handling
I0826 23:21:22.239672       6 log.go:172] (0xc001eca630) Data frame received for 5
I0826 23:21:22.239685       6 log.go:172] (0xc001f02f00) (5) Data frame handling
I0826 23:21:22.240893       6 log.go:172] (0xc001eca630) Data frame received for 1
I0826 23:21:22.240947       6 log.go:172] (0xc002807040) (1) Data frame handling
I0826 23:21:22.240974       6 log.go:172] (0xc002807040) (1) Data frame sent
I0826 23:21:22.240998       6 log.go:172] (0xc001eca630) (0xc002807040) Stream removed, broadcasting: 1
I0826 23:21:22.241019       6 log.go:172] (0xc001eca630) Go away received
I0826 23:21:22.241157       6 log.go:172] (0xc001eca630) (0xc002807040) Stream removed, broadcasting: 1
I0826 23:21:22.241176       6 log.go:172] (0xc001eca630) (0xc002c69720) Stream removed, broadcasting: 3
I0826 23:21:22.241185       6 log.go:172] (0xc001eca630) (0xc001f02f00) Stream removed, broadcasting: 5
Aug 26 23:21:22.241: INFO: Exec stderr: ""
Aug 26 23:21:22.241: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-634 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 23:21:22.241: INFO: >>> kubeConfig: /root/.kube/config
I0826 23:21:22.333811       6 log.go:172] (0xc003167080) (0xc000b646e0) Create stream
I0826 23:21:22.333837       6 log.go:172] (0xc003167080) (0xc000b646e0) Stream added, broadcasting: 1
I0826 23:21:22.335361       6 log.go:172] (0xc003167080) Reply frame received for 1
I0826 23:21:22.335388       6 log.go:172] (0xc003167080) (0xc002c69860) Create stream
I0826 23:21:22.335396       6 log.go:172] (0xc003167080) (0xc002c69860) Stream added, broadcasting: 3
I0826 23:21:22.335958       6 log.go:172] (0xc003167080) Reply frame received for 3
I0826 23:21:22.335979       6 log.go:172] (0xc003167080) (0xc002c699a0) Create stream
I0826 23:21:22.335987       6 log.go:172] (0xc003167080) (0xc002c699a0) Stream added, broadcasting: 5
I0826 23:21:22.336524       6 log.go:172] (0xc003167080) Reply frame received for 5
I0826 23:21:22.428479       6 log.go:172] (0xc003167080) Data frame received for 5
I0826 23:21:22.428507       6 log.go:172] (0xc002c699a0) (5) Data frame handling
I0826 23:21:22.428526       6 log.go:172] (0xc003167080) Data frame received for 3
I0826 23:21:22.428535       6 log.go:172] (0xc002c69860) (3) Data frame handling
I0826 23:21:22.428545       6 log.go:172] (0xc002c69860) (3) Data frame sent
I0826 23:21:22.428553       6 log.go:172] (0xc003167080) Data frame received for 3
I0826 23:21:22.428560       6 log.go:172] (0xc002c69860) (3) Data frame handling
I0826 23:21:22.433091       6 log.go:172] (0xc003167080) Data frame received for 1
I0826 23:21:22.433125       6 log.go:172] (0xc000b646e0) (1) Data frame handling
I0826 23:21:22.433145       6 log.go:172] (0xc000b646e0) (1) Data frame sent
I0826 23:21:22.433165       6 log.go:172] (0xc003167080) (0xc000b646e0) Stream removed, broadcasting: 1
I0826 23:21:22.433186       6 log.go:172] (0xc003167080) Go away received
I0826 23:21:22.433281       6 log.go:172] (0xc003167080) (0xc000b646e0) Stream removed, broadcasting: 1
I0826 23:21:22.433297       6 log.go:172] (0xc003167080) (0xc002c69860) Stream removed, broadcasting: 3
I0826 23:21:22.433309       6 log.go:172] (0xc003167080) (0xc002c699a0) Stream removed, broadcasting: 5
Aug 26 23:21:22.433: INFO: Exec stderr: ""
Aug 26 23:21:22.433: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-634 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 23:21:22.433: INFO: >>> kubeConfig: /root/.kube/config
I0826 23:21:22.698284       6 log.go:172] (0xc001ecad10) (0xc0028072c0) Create stream
I0826 23:21:22.698332       6 log.go:172] (0xc001ecad10) (0xc0028072c0) Stream added, broadcasting: 1
I0826 23:21:22.700114       6 log.go:172] (0xc001ecad10) Reply frame received for 1
I0826 23:21:22.700148       6 log.go:172] (0xc001ecad10) (0xc0003e5ea0) Create stream
I0826 23:21:22.700162       6 log.go:172] (0xc001ecad10) (0xc0003e5ea0) Stream added, broadcasting: 3
I0826 23:21:22.700950       6 log.go:172] (0xc001ecad10) Reply frame received for 3
I0826 23:21:22.700976       6 log.go:172] (0xc001ecad10) (0xc001f02fa0) Create stream
I0826 23:21:22.700985       6 log.go:172] (0xc001ecad10) (0xc001f02fa0) Stream added, broadcasting: 5
I0826 23:21:22.701620       6 log.go:172] (0xc001ecad10) Reply frame received for 5
I0826 23:21:22.771839       6 log.go:172] (0xc001ecad10) Data frame received for 5
I0826 23:21:22.771861       6 log.go:172] (0xc001f02fa0) (5) Data frame handling
I0826 23:21:22.771875       6 log.go:172] (0xc001ecad10) Data frame received for 3
I0826 23:21:22.771880       6 log.go:172] (0xc0003e5ea0) (3) Data frame handling
I0826 23:21:22.771888       6 log.go:172] (0xc0003e5ea0) (3) Data frame sent
I0826 23:21:22.771897       6 log.go:172] (0xc001ecad10) Data frame received for 3
I0826 23:21:22.771906       6 log.go:172] (0xc0003e5ea0) (3) Data frame handling
I0826 23:21:22.772834       6 log.go:172] (0xc001ecad10) Data frame received for 1
I0826 23:21:22.772845       6 log.go:172] (0xc0028072c0) (1) Data frame handling
I0826 23:21:22.772858       6 log.go:172] (0xc0028072c0) (1) Data frame sent
I0826 23:21:22.772871       6 log.go:172] (0xc001ecad10) (0xc0028072c0) Stream removed, broadcasting: 1
I0826 23:21:22.772883       6 log.go:172] (0xc001ecad10) Go away received
I0826 23:21:22.772989       6 log.go:172] (0xc001ecad10) (0xc0028072c0) Stream removed, broadcasting: 1
I0826 23:21:22.773003       6 log.go:172] (0xc001ecad10) (0xc0003e5ea0) Stream removed, broadcasting: 3
I0826 23:21:22.773013       6 log.go:172] (0xc001ecad10) (0xc001f02fa0) Stream removed, broadcasting: 5
Aug 26 23:21:22.773: INFO: Exec stderr: ""
Aug 26 23:21:22.773: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-634 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 23:21:22.773: INFO: >>> kubeConfig: /root/.kube/config
I0826 23:21:22.816379       6 log.go:172] (0xc00167eb00) (0xc001660640) Create stream
I0826 23:21:22.816413       6 log.go:172] (0xc00167eb00) (0xc001660640) Stream added, broadcasting: 1
I0826 23:21:22.818402       6 log.go:172] (0xc00167eb00) Reply frame received for 1
I0826 23:21:22.818446       6 log.go:172] (0xc00167eb00) (0xc002807360) Create stream
I0826 23:21:22.818457       6 log.go:172] (0xc00167eb00) (0xc002807360) Stream added, broadcasting: 3
I0826 23:21:22.819172       6 log.go:172] (0xc00167eb00) Reply frame received for 3
I0826 23:21:22.819215       6 log.go:172] (0xc00167eb00) (0xc0028074a0) Create stream
I0826 23:21:22.819233       6 log.go:172] (0xc00167eb00) (0xc0028074a0) Stream added, broadcasting: 5
I0826 23:21:22.819886       6 log.go:172] (0xc00167eb00) Reply frame received for 5
I0826 23:21:22.885161       6 log.go:172] (0xc00167eb00) Data frame received for 5
I0826 23:21:22.885221       6 log.go:172] (0xc0028074a0) (5) Data frame handling
I0826 23:21:22.885267       6 log.go:172] (0xc00167eb00) Data frame received for 3
I0826 23:21:22.885280       6 log.go:172] (0xc002807360) (3) Data frame handling
I0826 23:21:22.885301       6 log.go:172] (0xc002807360) (3) Data frame sent
I0826 23:21:22.885339       6 log.go:172] (0xc00167eb00) Data frame received for 3
I0826 23:21:22.885355       6 log.go:172] (0xc002807360) (3) Data frame handling
I0826 23:21:22.886483       6 log.go:172] (0xc00167eb00) Data frame received for 1
I0826 23:21:22.886501       6 log.go:172] (0xc001660640) (1) Data frame handling
I0826 23:21:22.886512       6 log.go:172] (0xc001660640) (1) Data frame sent
I0826 23:21:22.886526       6 log.go:172] (0xc00167eb00) (0xc001660640) Stream removed, broadcasting: 1
I0826 23:21:22.886537       6 log.go:172] (0xc00167eb00) Go away received
I0826 23:21:22.886655       6 log.go:172] (0xc00167eb00) (0xc001660640) Stream removed, broadcasting: 1
I0826 23:21:22.886686       6 log.go:172] (0xc00167eb00) (0xc002807360) Stream removed, broadcasting: 3
I0826 23:21:22.886697       6 log.go:172] (0xc00167eb00) (0xc0028074a0) Stream removed, broadcasting: 5
Aug 26 23:21:22.886: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:21:22.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-634" for this suite.

• [SLOW TEST:18.256 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2319,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:21:22.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:21:23.208: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:21:24.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4444" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":150,"skipped":2324,"failed":0}
SSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:21:24.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:21:24.311: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Aug 26 23:21:24.376: INFO: Pod name sample-pod: Found 0 pods out of 1
Aug 26 23:21:29.612: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 26 23:21:29.612: INFO: Creating deployment "test-rolling-update-deployment"
Aug 26 23:21:29.617: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Aug 26 23:21:29.976: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Aug 26 23:21:32.311: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Aug 26 23:21:32.664: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080890, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080890, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080890, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080890, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 23:21:34.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080890, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080890, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080890, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734080890, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 23:21:36.666: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 26 23:21:36.674: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-1991 /apis/apps/v1/namespaces/deployment-1991/deployments/test-rolling-update-deployment c4e57f31-4793-4916-9bb0-202b725af141 4039923 1 2020-08-26 23:21:29 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00369bab8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-26 23:21:30 +0000 UTC,LastTransitionTime:2020-08-26 23:21:30 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-08-26 23:21:36 +0000 UTC,LastTransitionTime:2020-08-26 23:21:30 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 26 23:21:36.677: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-1991 /apis/apps/v1/namespaces/deployment-1991/replicasets/test-rolling-update-deployment-67cf4f6444 c8bdd7d2-4fc8-4e39-879b-36786e01e30c 4039912 1 2020-08-26 23:21:30 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment c4e57f31-4793-4916-9bb0-202b725af141 0xc00369bf47 0xc00369bf48}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00369bfb8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 26 23:21:36.677: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug 26 23:21:36.678: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-1991 /apis/apps/v1/namespaces/deployment-1991/replicasets/test-rolling-update-controller 3e069ac0-422f-48f0-ad57-505b91896f09 4039921 2 2020-08-26 23:21:24 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment c4e57f31-4793-4916-9bb0-202b725af141 0xc00369be5f 0xc00369be70}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00369bed8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 26 23:21:36.681: INFO: Pod "test-rolling-update-deployment-67cf4f6444-pg9p2" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-pg9p2 test-rolling-update-deployment-67cf4f6444- deployment-1991 /api/v1/namespaces/deployment-1991/pods/test-rolling-update-deployment-67cf4f6444-pg9p2 ee858595-98c8-4dbc-8f8d-abb2be81b941 4039911 0 2020-08-26 23:21:30 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 c8bdd7d2-4fc8-4e39-879b-36786e01e30c 0xc0037d2267 0xc0037d2268}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-46tp9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-46tp9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-46tp9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:21:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:21:35 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:21:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:21:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.2,StartTime:2020-08-26 23:21:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 23:21:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://742dd21343a147fdc1bd78d49d080ab5aa25f156708581f1205386ff77a03ed7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:21:36.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1991" for this suite.

• [SLOW TEST:12.438 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":151,"skipped":2330,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:21:36.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:21:36.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5848'
Aug 26 23:21:37.248: INFO: stderr: ""
Aug 26 23:21:37.248: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Aug 26 23:21:37.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5848'
Aug 26 23:21:37.521: INFO: stderr: ""
Aug 26 23:21:37.521: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 26 23:21:38.525: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 23:21:38.526: INFO: Found 0 / 1
Aug 26 23:21:39.532: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 23:21:39.532: INFO: Found 0 / 1
Aug 26 23:21:40.526: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 23:21:40.526: INFO: Found 1 / 1
Aug 26 23:21:40.526: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 26 23:21:40.529: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 23:21:40.529: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 26 23:21:40.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-56q49 --namespace=kubectl-5848'
Aug 26 23:21:40.643: INFO: stderr: ""
Aug 26 23:21:40.643: INFO: stdout: "Name:         agnhost-master-56q49\nNamespace:    kubectl-5848\nPriority:     0\nNode:         jerma-worker2/172.18.0.3\nStart Time:   Wed, 26 Aug 2020 23:21:37 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.244.1.82\nIPs:\n  IP:           10.244.1.82\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://d0b71976248d932982a09ad7f1fcd0accb7a1040d8b66f64299afd3145d0486f\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 26 Aug 2020 23:21:40 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cc9pk (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-cc9pk:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-cc9pk\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                    Message\n  ----    ------     ----       ----                    -------\n  Normal  Scheduled    default-scheduler       Successfully assigned kubectl-5848/agnhost-master-56q49 to jerma-worker2\n  Normal  Pulled     2s         kubelet, jerma-worker2  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    1s         kubelet, jerma-worker2  Created container agnhost-master\n  Normal  Started    0s         kubelet, jerma-worker2  Started container agnhost-master\n"
Aug 26 23:21:40.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-5848'
Aug 26 23:21:40.769: INFO: stderr: ""
Aug 26 23:21:40.769: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-5848\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  3s    replication-controller  Created pod: agnhost-master-56q49\n"
Aug 26 23:21:40.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-5848'
Aug 26 23:21:40.886: INFO: stderr: ""
Aug 26 23:21:40.886: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-5848\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.100.42.141\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.1.82:6379\nSession Affinity:  None\nEvents:            \n"
Aug 26 23:21:40.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane'
Aug 26 23:21:41.012: INFO: stderr: ""
Aug 26 23:21:41.012: INFO: stdout: "Name:               jerma-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 15 Aug 2020 09:37:06 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-control-plane\n  AcquireTime:     \n  RenewTime:       Wed, 26 Aug 2020 23:21:30 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Wed, 26 Aug 2020 23:17:27 +0000   Sat, 15 Aug 2020 09:37:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Wed, 26 Aug 2020 23:17:27 +0000   Sat, 15 Aug 2020 09:37:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Wed, 26 Aug 2020 23:17:27 +0000   Sat, 15 Aug 2020 09:37:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Wed, 26 Aug 2020 23:17:27 +0000   Sat, 15 Aug 2020 09:37:40 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.10\n  Hostname:    jerma-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 e52c45bc589d48d995e8fd79ad5bf250\n  System UUID:                b981bdc7-d264-48ef-ab5e-3308e23aaf13\n  Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n  Kernel Version:             4.15.0-109-generic\n  OS Image:                   Ubuntu 19.10\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3-14-g449e9269\n  Kubelet Version:            v1.17.5\n  Kube-Proxy Version:         v1.17.5\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-6955765f44-bvrm4                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     11d\n  kube-system                 coredns-6955765f44-db8rh                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     11d\n  kube-system                 etcd-jerma-control-plane              
         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11d\n  kube-system                 kindnet-j88mt                                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      11d\n  kube-system                 kube-apiserver-jerma-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         11d\n  kube-system                 kube-controller-manager-jerma-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         11d\n  kube-system                 kube-proxy-hmb6l                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         11d\n  kube-system                 kube-scheduler-jerma-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         11d\n  local-path-storage          local-path-provisioner-58f6947c7-p2cqw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              \n"
Aug 26 23:21:41.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-5848'
Aug 26 23:21:41.122: INFO: stderr: ""
Aug 26 23:21:41.122: INFO: stdout: "Name:         kubectl-5848\nLabels:       e2e-framework=kubectl\n              e2e-run=e62b3103-4f45-45d8-a479-4e9a2dda1ead\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:21:41.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5848" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":152,"skipped":2342,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:21:41.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Aug 26 23:21:41.237: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:21:41.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1209" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":153,"skipped":2361,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:21:41.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 26 23:21:41.506: INFO: Waiting up to 5m0s for pod "pod-d2cafe8e-43d5-4b75-9153-0b50d97c7df6" in namespace "emptydir-9430" to be "success or failure"
Aug 26 23:21:41.527: INFO: Pod "pod-d2cafe8e-43d5-4b75-9153-0b50d97c7df6": Phase="Pending", Reason="", readiness=false. Elapsed: 21.185066ms
Aug 26 23:21:44.098: INFO: Pod "pod-d2cafe8e-43d5-4b75-9153-0b50d97c7df6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.591847835s
Aug 26 23:21:46.101: INFO: Pod "pod-d2cafe8e-43d5-4b75-9153-0b50d97c7df6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.5954607s
Aug 26 23:21:48.105: INFO: Pod "pod-d2cafe8e-43d5-4b75-9153-0b50d97c7df6": Phase="Running", Reason="", readiness=true. Elapsed: 6.598789496s
Aug 26 23:21:50.109: INFO: Pod "pod-d2cafe8e-43d5-4b75-9153-0b50d97c7df6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.603187707s
STEP: Saw pod success
Aug 26 23:21:50.109: INFO: Pod "pod-d2cafe8e-43d5-4b75-9153-0b50d97c7df6" satisfied condition "success or failure"
Aug 26 23:21:50.111: INFO: Trying to get logs from node jerma-worker2 pod pod-d2cafe8e-43d5-4b75-9153-0b50d97c7df6 container test-container: <nil>
STEP: delete the pod
Aug 26 23:21:50.246: INFO: Waiting for pod pod-d2cafe8e-43d5-4b75-9153-0b50d97c7df6 to disappear
Aug 26 23:21:50.258: INFO: Pod pod-d2cafe8e-43d5-4b75-9153-0b50d97c7df6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:21:50.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9430" for this suite.

• [SLOW TEST:8.913 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2384,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:21:50.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 26 23:21:57.010: INFO: Successfully updated pod "pod-update-d768cee6-a280-4db7-9cff-a4f2f2aa157e"
STEP: verifying the updated pod is in kubernetes
Aug 26 23:21:57.074: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:21:57.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6344" for this suite.

• [SLOW TEST:6.805 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2391,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:21:57.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug 26 23:21:58.153: INFO: Pod name wrapped-volume-race-e7048e11-a2a3-4857-a5c0-efc8828d05c6: Found 0 pods out of 5
Aug 26 23:22:03.249: INFO: Pod name wrapped-volume-race-e7048e11-a2a3-4857-a5c0-efc8828d05c6: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e7048e11-a2a3-4857-a5c0-efc8828d05c6 in namespace emptydir-wrapper-3844, will wait for the garbage collector to delete the pods
Aug 26 23:22:21.903: INFO: Deleting ReplicationController wrapped-volume-race-e7048e11-a2a3-4857-a5c0-efc8828d05c6 took: 560.984322ms
Aug 26 23:22:23.104: INFO: Terminating ReplicationController wrapped-volume-race-e7048e11-a2a3-4857-a5c0-efc8828d05c6 pods took: 1.200293105s
STEP: Creating RC which spawns configmap-volume pods
Aug 26 23:22:42.293: INFO: Pod name wrapped-volume-race-4c9e2db0-3b27-4962-b32b-435351d3abb8: Found 0 pods out of 5
Aug 26 23:22:47.479: INFO: Pod name wrapped-volume-race-4c9e2db0-3b27-4962-b32b-435351d3abb8: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4c9e2db0-3b27-4962-b32b-435351d3abb8 in namespace emptydir-wrapper-3844, will wait for the garbage collector to delete the pods
Aug 26 23:23:05.773: INFO: Deleting ReplicationController wrapped-volume-race-4c9e2db0-3b27-4962-b32b-435351d3abb8 took: 6.766689ms
Aug 26 23:23:06.273: INFO: Terminating ReplicationController wrapped-volume-race-4c9e2db0-3b27-4962-b32b-435351d3abb8 pods took: 500.235962ms
STEP: Creating RC which spawns configmap-volume pods
Aug 26 23:23:23.071: INFO: Pod name wrapped-volume-race-5c395467-ff0a-41a4-bafa-e30e14d44d6b: Found 0 pods out of 5
Aug 26 23:23:28.081: INFO: Pod name wrapped-volume-race-5c395467-ff0a-41a4-bafa-e30e14d44d6b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-5c395467-ff0a-41a4-bafa-e30e14d44d6b in namespace emptydir-wrapper-3844, will wait for the garbage collector to delete the pods
Aug 26 23:23:46.334: INFO: Deleting ReplicationController wrapped-volume-race-5c395467-ff0a-41a4-bafa-e30e14d44d6b took: 173.548894ms
Aug 26 23:23:47.535: INFO: Terminating ReplicationController wrapped-volume-race-5c395467-ff0a-41a4-bafa-e30e14d44d6b pods took: 1.200228339s
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:24:03.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3844" for this suite.

• [SLOW TEST:126.329 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":156,"skipped":2455,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:24:03.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:24:03.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:24:07.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5957" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2463,"failed":0}
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:24:07.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 26 23:24:07.651: INFO: Waiting up to 5m0s for pod "downward-api-2602e957-29a9-448a-b00b-ad96090b706f" in namespace "downward-api-6963" to be "success or failure"
Aug 26 23:24:07.656: INFO: Pod "downward-api-2602e957-29a9-448a-b00b-ad96090b706f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.952934ms
Aug 26 23:24:09.791: INFO: Pod "downward-api-2602e957-29a9-448a-b00b-ad96090b706f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139991311s
Aug 26 23:24:11.797: INFO: Pod "downward-api-2602e957-29a9-448a-b00b-ad96090b706f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.145493804s
STEP: Saw pod success
Aug 26 23:24:11.797: INFO: Pod "downward-api-2602e957-29a9-448a-b00b-ad96090b706f" satisfied condition "success or failure"
Aug 26 23:24:11.810: INFO: Trying to get logs from node jerma-worker pod downward-api-2602e957-29a9-448a-b00b-ad96090b706f container dapi-container: <nil>
STEP: delete the pod
Aug 26 23:24:11.850: INFO: Waiting for pod downward-api-2602e957-29a9-448a-b00b-ad96090b706f to disappear
Aug 26 23:24:11.855: INFO: Pod downward-api-2602e957-29a9-448a-b00b-ad96090b706f no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:24:11.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6963" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2472,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:24:11.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Aug 26 23:24:12.072: INFO: Waiting up to 5m0s for pod "var-expansion-238e0173-d5d5-42e4-9864-0df6bbce2bc8" in namespace "var-expansion-5870" to be "success or failure"
Aug 26 23:24:12.106: INFO: Pod "var-expansion-238e0173-d5d5-42e4-9864-0df6bbce2bc8": Phase="Pending", Reason="", readiness=false. Elapsed: 33.89045ms
Aug 26 23:24:14.241: INFO: Pod "var-expansion-238e0173-d5d5-42e4-9864-0df6bbce2bc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16840264s
Aug 26 23:24:16.244: INFO: Pod "var-expansion-238e0173-d5d5-42e4-9864-0df6bbce2bc8": Phase="Running", Reason="", readiness=true. Elapsed: 4.171907631s
Aug 26 23:24:18.248: INFO: Pod "var-expansion-238e0173-d5d5-42e4-9864-0df6bbce2bc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.175489619s
STEP: Saw pod success
Aug 26 23:24:18.248: INFO: Pod "var-expansion-238e0173-d5d5-42e4-9864-0df6bbce2bc8" satisfied condition "success or failure"
Aug 26 23:24:18.276: INFO: Trying to get logs from node jerma-worker pod var-expansion-238e0173-d5d5-42e4-9864-0df6bbce2bc8 container dapi-container: <nil>
STEP: delete the pod
Aug 26 23:24:18.351: INFO: Waiting for pod var-expansion-238e0173-d5d5-42e4-9864-0df6bbce2bc8 to disappear
Aug 26 23:24:18.367: INFO: Pod var-expansion-238e0173-d5d5-42e4-9864-0df6bbce2bc8 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:24:18.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5870" for this suite.

• [SLOW TEST:6.488 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2513,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:24:18.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 23:24:18.495: INFO: Waiting up to 5m0s for pod "downwardapi-volume-defdaa3e-2522-48f4-b96d-0383dbab6ad0" in namespace "downward-api-1167" to be "success or failure"
Aug 26 23:24:18.499: INFO: Pod "downwardapi-volume-defdaa3e-2522-48f4-b96d-0383dbab6ad0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.537191ms
Aug 26 23:24:20.502: INFO: Pod "downwardapi-volume-defdaa3e-2522-48f4-b96d-0383dbab6ad0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006904117s
Aug 26 23:24:22.506: INFO: Pod "downwardapi-volume-defdaa3e-2522-48f4-b96d-0383dbab6ad0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010941863s
STEP: Saw pod success
Aug 26 23:24:22.506: INFO: Pod "downwardapi-volume-defdaa3e-2522-48f4-b96d-0383dbab6ad0" satisfied condition "success or failure"
Aug 26 23:24:22.509: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-defdaa3e-2522-48f4-b96d-0383dbab6ad0 container client-container: <nil>
STEP: delete the pod
Aug 26 23:24:22.870: INFO: Waiting for pod downwardapi-volume-defdaa3e-2522-48f4-b96d-0383dbab6ad0 to disappear
Aug 26 23:24:22.876: INFO: Pod downwardapi-volume-defdaa3e-2522-48f4-b96d-0383dbab6ad0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:24:22.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1167" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2553,"failed":0}
SSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:24:22.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 26 23:24:23.053: INFO: Waiting up to 5m0s for pod "downward-api-0cd3c457-fe4d-40d7-98eb-a62020318e9a" in namespace "downward-api-7439" to be "success or failure"
Aug 26 23:24:23.081: INFO: Pod "downward-api-0cd3c457-fe4d-40d7-98eb-a62020318e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 28.279057ms
Aug 26 23:24:25.084: INFO: Pod "downward-api-0cd3c457-fe4d-40d7-98eb-a62020318e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031482404s
Aug 26 23:24:27.089: INFO: Pod "downward-api-0cd3c457-fe4d-40d7-98eb-a62020318e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036212028s
Aug 26 23:24:29.092: INFO: Pod "downward-api-0cd3c457-fe4d-40d7-98eb-a62020318e9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03977239s
STEP: Saw pod success
Aug 26 23:24:29.092: INFO: Pod "downward-api-0cd3c457-fe4d-40d7-98eb-a62020318e9a" satisfied condition "success or failure"
Aug 26 23:24:29.094: INFO: Trying to get logs from node jerma-worker2 pod downward-api-0cd3c457-fe4d-40d7-98eb-a62020318e9a container dapi-container: <nil>
STEP: delete the pod
Aug 26 23:24:29.230: INFO: Waiting for pod downward-api-0cd3c457-fe4d-40d7-98eb-a62020318e9a to disappear
Aug 26 23:24:29.252: INFO: Pod downward-api-0cd3c457-fe4d-40d7-98eb-a62020318e9a no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:24:29.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7439" for this suite.

• [SLOW TEST:6.376 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2557,"failed":0}
S
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:24:29.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-f6d4abe9-7a1a-4fea-8a80-3e8433161f7a
STEP: Creating a pod to test consume secrets
Aug 26 23:24:29.724: INFO: Waiting up to 5m0s for pod "pod-secrets-bb8e3faa-f592-49c6-b3cd-74a12626b773" in namespace "secrets-7723" to be "success or failure"
Aug 26 23:24:29.760: INFO: Pod "pod-secrets-bb8e3faa-f592-49c6-b3cd-74a12626b773": Phase="Pending", Reason="", readiness=false. Elapsed: 36.096672ms
Aug 26 23:24:31.764: INFO: Pod "pod-secrets-bb8e3faa-f592-49c6-b3cd-74a12626b773": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03976119s
Aug 26 23:24:33.768: INFO: Pod "pod-secrets-bb8e3faa-f592-49c6-b3cd-74a12626b773": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044002245s
Aug 26 23:24:35.772: INFO: Pod "pod-secrets-bb8e3faa-f592-49c6-b3cd-74a12626b773": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.048007134s
STEP: Saw pod success
Aug 26 23:24:35.772: INFO: Pod "pod-secrets-bb8e3faa-f592-49c6-b3cd-74a12626b773" satisfied condition "success or failure"
Aug 26 23:24:35.775: INFO: Trying to get logs from node jerma-worker pod pod-secrets-bb8e3faa-f592-49c6-b3cd-74a12626b773 container secret-volume-test: <nil>
STEP: delete the pod
Aug 26 23:24:35.800: INFO: Waiting for pod pod-secrets-bb8e3faa-f592-49c6-b3cd-74a12626b773 to disappear
Aug 26 23:24:35.826: INFO: Pod pod-secrets-bb8e3faa-f592-49c6-b3cd-74a12626b773 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:24:35.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7723" for this suite.
STEP: Destroying namespace "secret-namespace-6483" for this suite.

• [SLOW TEST:6.583 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2558,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:24:35.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 26 23:24:35.945: INFO: Waiting up to 5m0s for pod "pod-3a58c57e-a71e-4d90-bb72-85be14427b3d" in namespace "emptydir-9099" to be "success or failure"
Aug 26 23:24:35.967: INFO: Pod "pod-3a58c57e-a71e-4d90-bb72-85be14427b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 21.698391ms
Aug 26 23:24:37.971: INFO: Pod "pod-3a58c57e-a71e-4d90-bb72-85be14427b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025838136s
Aug 26 23:24:39.975: INFO: Pod "pod-3a58c57e-a71e-4d90-bb72-85be14427b3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029834185s
STEP: Saw pod success
Aug 26 23:24:39.975: INFO: Pod "pod-3a58c57e-a71e-4d90-bb72-85be14427b3d" satisfied condition "success or failure"
Aug 26 23:24:39.978: INFO: Trying to get logs from node jerma-worker pod pod-3a58c57e-a71e-4d90-bb72-85be14427b3d container test-container: <nil>
STEP: delete the pod
Aug 26 23:24:40.047: INFO: Waiting for pod pod-3a58c57e-a71e-4d90-bb72-85be14427b3d to disappear
Aug 26 23:24:40.096: INFO: Pod pod-3a58c57e-a71e-4d90-bb72-85be14427b3d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:24:40.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9099" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2561,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:24:40.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:24:45.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-769" for this suite.

• [SLOW TEST:5.458 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":164,"skipped":2568,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:24:45.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9604.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9604.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9604.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9604.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9604.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9604.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 23:24:51.738: INFO: DNS probes using dns-9604/dns-test-3d3c699b-a75e-4f18-901e-0732d5b92093 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:24:51.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9604" for this suite.

• [SLOW TEST:6.482 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":165,"skipped":2579,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:24:52.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 23:24:52.861: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 23:24:54.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081092, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081092, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081092, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081092, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 23:24:57.904: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:24:58.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5586" for this suite.
STEP: Destroying namespace "webhook-5586-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.571 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":166,"skipped":2587,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:24:58.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:24:58.922: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/: 
alternatives.log
containers/
[the proxy returned this same two-entry directory listing for each of the 20 requests in this test; the surrounding INFO lines were stripped with the HTML, and the log is truncated here, dropping the remaining responses, this test's PASSED record (completed:167), and the header of the next test, [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set]
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-54e31240-1fb8-4d37-8ec1-6c5b5d0be7d1
STEP: Creating a pod to test consume secrets
Aug 26 23:24:59.396: INFO: Waiting up to 5m0s for pod "pod-secrets-e538cb01-247c-4309-81f3-31a7225be467" in namespace "secrets-4445" to be "success or failure"
Aug 26 23:24:59.400: INFO: Pod "pod-secrets-e538cb01-247c-4309-81f3-31a7225be467": Phase="Pending", Reason="", readiness=false. Elapsed: 3.23154ms
Aug 26 23:25:01.404: INFO: Pod "pod-secrets-e538cb01-247c-4309-81f3-31a7225be467": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007796634s
Aug 26 23:25:03.407: INFO: Pod "pod-secrets-e538cb01-247c-4309-81f3-31a7225be467": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011000192s
STEP: Saw pod success
Aug 26 23:25:03.407: INFO: Pod "pod-secrets-e538cb01-247c-4309-81f3-31a7225be467" satisfied condition "success or failure"
Aug 26 23:25:03.410: INFO: Trying to get logs from node jerma-worker pod pod-secrets-e538cb01-247c-4309-81f3-31a7225be467 container secret-volume-test: <nil>
STEP: delete the pod
Aug 26 23:25:03.427: INFO: Waiting for pod pod-secrets-e538cb01-247c-4309-81f3-31a7225be467 to disappear
Aug 26 23:25:03.432: INFO: Pod pod-secrets-e538cb01-247c-4309-81f3-31a7225be467 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:25:03.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4445" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2660,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:25:03.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 26 23:25:03.675: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:25:13.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3621" for this suite.

• [SLOW TEST:9.860 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":169,"skipped":2690,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:25:13.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2393.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-2393.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2393.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-2393.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2393.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2393.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-2393.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2393.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-2393.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2393.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 23:25:23.772: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:23.775: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:23.779: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:23.783: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:23.793: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:23.797: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:23.800: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:23.803: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:23.808: INFO: Lookups using dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2393.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2393.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local jessie_udp@dns-test-service-2.dns-2393.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2393.svc.cluster.local]

Aug 26 23:25:28.812: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:28.815: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:28.818: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:28.821: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:28.828: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:28.830: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:28.833: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:28.836: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:28.841: INFO: Lookups using dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2393.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2393.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local jessie_udp@dns-test-service-2.dns-2393.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2393.svc.cluster.local]

Aug 26 23:25:33.817: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:33.821: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:33.823: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:33.826: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:33.833: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:33.836: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:33.839: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:33.841: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:33.846: INFO: Lookups using dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2393.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2393.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local jessie_udp@dns-test-service-2.dns-2393.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2393.svc.cluster.local]

Aug 26 23:25:38.824: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:38.860: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:38.863: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:38.865: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:38.874: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:38.877: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:38.880: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:38.900: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:38.906: INFO: Lookups using dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2393.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2393.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local jessie_udp@dns-test-service-2.dns-2393.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2393.svc.cluster.local]

Aug 26 23:25:43.812: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:43.815: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:43.818: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:43.821: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:43.830: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:43.833: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:43.835: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:43.838: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:43.843: INFO: Lookups using dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2393.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2393.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local jessie_udp@dns-test-service-2.dns-2393.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2393.svc.cluster.local]

Aug 26 23:25:48.813: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:48.817: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:48.820: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:48.822: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:48.831: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:48.833: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:48.836: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:48.838: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2393.svc.cluster.local from pod dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a: the server could not find the requested resource (get pods dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a)
Aug 26 23:25:48.843: INFO: Lookups using dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2393.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2393.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2393.svc.cluster.local jessie_udp@dns-test-service-2.dns-2393.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2393.svc.cluster.local]

Aug 26 23:25:53.845: INFO: DNS probes using dns-2393/dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:25:54.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2393" for this suite.

• [SLOW TEST:41.288 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":170,"skipped":2704,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
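
The retries above all wrap the same probe: an nslookup of the headless service's FQDN, run inside the test pod. A minimal Go sketch of that probe, shelling out to kubectl the way the run itself does (pod, namespace, and kubeconfig values are copied from this log and are otherwise placeholders; this is not the framework's own code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Names copied from the log above; substitute your own cluster's values.
	ns := "dns-2393"
	pod := "dns-test-5abc94a3-81e9-40c8-970a-6c0b18bb6e9a"
	fqdn := "dns-test-service-2.dns-2393.svc.cluster.local"

	// kubectl exec <pod> -- nslookup <fqdn>, mirroring the probes above.
	out, err := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config",
		"exec", "-n", ns, pod, "--", "nslookup", fqdn).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// A failing lookup is the condition the retries above were waiting out.
		fmt.Println("lookup not yet resolvable:", err)
	}
}

Once the subdomain records propagate, the probe resolves, which is the transition visible at 23:25:53 above.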
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:25:54.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Aug 26 23:25:54.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Aug 26 23:25:54.957: INFO: stderr: ""
Aug 26 23:25:54.957: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:25:54.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7957" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":171,"skipped":2726,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
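
The whole check reduces to scanning that stdout for the core group/version. A sketch in Go, assuming the same kubectl binary and kubeconfig as the run above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation the test logs above: kubectl api-versions.
	out, err := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config",
		"api-versions").Output()
	if err != nil {
		panic(err)
	}
	// Each group/version is on its own line; the core API is the bare "v1".
	for _, gv := range strings.Fields(string(out)) {
		if gv == "v1" {
			fmt.Println("core v1 is available")
			return
		}
	}
	fmt.Println("core v1 missing")
}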
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:25:54.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-8f1edbb6-45f3-4d0d-9065-9ed8e81ee6eb
STEP: Creating a pod to test consume configMaps
Aug 26 23:25:55.037: INFO: Waiting up to 5m0s for pod "pod-configmaps-ba2bbe0f-cb84-4288-8b63-493712ab2f8f" in namespace "configmap-6103" to be "success or failure"
Aug 26 23:25:55.061: INFO: Pod "pod-configmaps-ba2bbe0f-cb84-4288-8b63-493712ab2f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.948153ms
Aug 26 23:25:57.073: INFO: Pod "pod-configmaps-ba2bbe0f-cb84-4288-8b63-493712ab2f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036042056s
Aug 26 23:25:59.077: INFO: Pod "pod-configmaps-ba2bbe0f-cb84-4288-8b63-493712ab2f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039238688s
Aug 26 23:26:01.098: INFO: Pod "pod-configmaps-ba2bbe0f-cb84-4288-8b63-493712ab2f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060318338s
Aug 26 23:26:03.102: INFO: Pod "pod-configmaps-ba2bbe0f-cb84-4288-8b63-493712ab2f8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.064498007s
STEP: Saw pod success
Aug 26 23:26:03.102: INFO: Pod "pod-configmaps-ba2bbe0f-cb84-4288-8b63-493712ab2f8f" satisfied condition "success or failure"
Aug 26 23:26:03.105: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-ba2bbe0f-cb84-4288-8b63-493712ab2f8f container configmap-volume-test: 
STEP: delete the pod
Aug 26 23:26:03.154: INFO: Waiting for pod pod-configmaps-ba2bbe0f-cb84-4288-8b63-493712ab2f8f to disappear
Aug 26 23:26:03.167: INFO: Pod pod-configmaps-ba2bbe0f-cb84-4288-8b63-493712ab2f8f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:26:03.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6103" for this suite.

• [SLOW TEST:8.209 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2774,"failed":0}
SSSSSS
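
For context, the fixture this test builds can be approximated with a ConfigMap plus a pod whose volume uses an items mapping to remap a key onto a nested path. A hedged sketch (all names, the busybox image, and the mount path are illustrative assumptions, not the suite's actual values):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Hypothetical manifest: the items list maps key "data-1" onto the file
// path/to/data-1 under the mount point, which is what the test exercises.
const manifest = `apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-map
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-configmap-mapping
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["cat", "/etc/demo/path/to/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/demo
  volumes:
  - name: cfg
    configMap:
      name: demo-map
      items:
      - key: data-1
        path: path/to/data-1
`

func main() {
	// Pipe the manifest to kubectl apply, as a stand-in for the framework's
	// client-side pod creation.
	cmd := exec.Command("kubectl", "apply", "-f", "-")
	cmd.Stdin = strings.NewReader(manifest)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}

The pod reaches Succeeded once cat prints the mapped value, which matches the Pending-then-Succeeded phase progression logged above.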
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:26:03.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 23:26:04.570: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 23:26:06.579: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081164, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081164, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081164, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081164, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 23:26:08.637: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081164, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081164, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081164, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081164, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 23:26:11.647: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Aug 26 23:26:15.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-9725 to-be-attached-pod -i -c=container1'
Aug 26 23:26:15.945: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:26:15.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9725" for this suite.
STEP: Destroying namespace "webhook-9725-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.307 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":173,"skipped":2780,"failed":0}
SSSSSS
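
The rc: 1 above comes from a validating webhook registered against the pods/attach subresource; kubectl attach surfaces to admission as a CONNECT operation. A sketch of what such a registration could look like (an assumption, not the suite's actual manifest; the service name, namespace, path, and caBundle are placeholders):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Assumed shape of a webhook that denies `kubectl attach`: it intercepts
// CONNECT on the pods/attach subresource. Placeholder fields are marked.
const webhookConfig = `apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-attaching-pod-demo
webhooks:
- name: deny-attaching-pod.example.com
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  failurePolicy: Fail
  clientConfig:
    service:
      name: e2e-test-webhook   # placeholder service name
      namespace: webhook-demo  # placeholder namespace
      path: /pods/attach       # placeholder serving path
    caBundle: ""               # placeholder; must be the CA that signed the webhook cert
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CONNECT"]
    resources: ["pods/attach"]
`

func main() {
	cmd := exec.Command("kubectl", "apply", "-f", "-")
	cmd.Stdin = strings.NewReader(webhookConfig)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}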
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:26:16.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Aug 26 23:26:16.553: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8132" to be "success or failure"
Aug 26 23:26:16.570: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 16.827583ms
Aug 26 23:26:18.576: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022814055s
Aug 26 23:26:20.580: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027015232s
Aug 26 23:26:22.631: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 6.077979208s
Aug 26 23:26:24.635: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.081283674s
STEP: Saw pod success
Aug 26 23:26:24.635: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Aug 26 23:26:24.639: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Aug 26 23:26:24.679: INFO: Waiting for pod pod-host-path-test to disappear
Aug 26 23:26:24.720: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:26:24.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-8132" for this suite.

• [SLOW TEST:8.246 seconds]
[sig-storage] HostPath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2786,"failed":0}
SS
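
The property under test is the mode of the hostPath mount as seen inside the container. A sketch of an equivalent fixture (the host path, image, and mode-printing command are assumptions, not the suite's values):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Hypothetical pod: mounts a hostPath volume and prints the mode bits of the
// mount point, the value this kind of test asserts on.
const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: hostpath-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "stat -c 'mode of /test-volume: %a' /test-volume"]
    volumeMounts:
    - name: vol
      mountPath: /test-volume
  volumes:
  - name: vol
    hostPath:
      path: /tmp
`

func main() {
	cmd := exec.Command("kubectl", "apply", "-f", "-")
	cmd.Stdin = strings.NewReader(manifest)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}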
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:26:24.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-sc5nv in namespace proxy-2876
I0826 23:26:24.861411       6 runners.go:189] Created replication controller with name: proxy-service-sc5nv, namespace: proxy-2876, replica count: 1
I0826 23:26:25.911849       6 runners.go:189] proxy-service-sc5nv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 23:26:26.912097       6 runners.go:189] proxy-service-sc5nv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 23:26:27.912312       6 runners.go:189] proxy-service-sc5nv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 23:26:28.912507       6 runners.go:189] proxy-service-sc5nv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0826 23:26:29.912715       6 runners.go:189] proxy-service-sc5nv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0826 23:26:30.913029       6 runners.go:189] proxy-service-sc5nv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0826 23:26:31.913306       6 runners.go:189] proxy-service-sc5nv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0826 23:26:32.913492       6 runners.go:189] proxy-service-sc5nv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0826 23:26:33.913729       6 runners.go:189] proxy-service-sc5nv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0826 23:26:34.913929       6 runners.go:189] proxy-service-sc5nv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0826 23:26:35.914166       6 runners.go:189] proxy-service-sc5nv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0826 23:26:36.914409       6 runners.go:189] proxy-service-sc5nv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0826 23:26:37.914657       6 runners.go:189] proxy-service-sc5nv Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 26 23:26:37.920: INFO: setup took 13.146341862s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
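
Each attempt below is an HTTP GET against an apiserver proxy path for the service or one of the pod's named ports. A single attempt can be reproduced with kubectl get --raw; a Go sketch (namespace and pod names copied from this log, placeholders for any other cluster):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// One proxy attempt like those below: GET the apiserver proxy path for a
	// pod port. Port 160 answers "foo" in this run, per the log lines below.
	path := "/api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:160/proxy/"
	out, err := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config",
		"get", "--raw", path).Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("body: %q\n", out)
}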
Aug 26 23:26:37.925: INFO: (0) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 5.159555ms)
Aug 26 23:26:37.925: INFO: (0) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:1080/proxy/: test<... (200; 5.399639ms)
Aug 26 23:26:37.925: INFO: (0) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:162/proxy/: bar (200; 5.406492ms)
Aug 26 23:26:37.930: INFO: (0) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:162/proxy/: bar (200; 9.962033ms)
Aug 26 23:26:37.930: INFO: (0) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44/proxy/: test (200; 9.874071ms)
Aug 26 23:26:37.930: INFO: (0) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 10.647772ms)
Aug 26 23:26:37.930: INFO: (0) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname2/proxy/: bar (200; 10.653544ms)
Aug 26 23:26:37.931: INFO: (0) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:1080/proxy/: ... (200; 11.00395ms)
Aug 26 23:26:37.931: INFO: (0) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname1/proxy/: foo (200; 11.132252ms)
Aug 26 23:26:37.931: INFO: (0) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname2/proxy/: bar (200; 11.311202ms)
Aug 26 23:26:37.931: INFO: (0) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname1/proxy/: foo (200; 11.36808ms)
Aug 26 23:26:37.933: INFO: (0) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:443/proxy/: test<... (200; 3.486646ms)
Aug 26 23:26:37.940: INFO: (1) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 3.765938ms)
Aug 26 23:26:37.940: INFO: (1) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:162/proxy/: bar (200; 3.853145ms)
Aug 26 23:26:37.940: INFO: (1) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 3.976321ms)
Aug 26 23:26:37.940: INFO: (1) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:460/proxy/: tls baz (200; 4.031206ms)
Aug 26 23:26:37.940: INFO: (1) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:443/proxy/: test (200; 4.469518ms)
Aug 26 23:26:37.941: INFO: (1) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:1080/proxy/: ... (200; 5.014234ms)
Aug 26 23:26:37.941: INFO: (1) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname1/proxy/: foo (200; 5.023904ms)
Aug 26 23:26:37.941: INFO: (1) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname1/proxy/: foo (200; 5.016825ms)
Aug 26 23:26:37.941: INFO: (1) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname2/proxy/: bar (200; 5.016986ms)
Aug 26 23:26:37.941: INFO: (1) /api/v1/namespaces/proxy-2876/services/https:proxy-service-sc5nv:tlsportname1/proxy/: tls baz (200; 5.073665ms)
Aug 26 23:26:37.941: INFO: (1) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname2/proxy/: bar (200; 5.128074ms)
Aug 26 23:26:37.941: INFO: (1) /api/v1/namespaces/proxy-2876/services/https:proxy-service-sc5nv:tlsportname2/proxy/: tls qux (200; 5.235403ms)
Aug 26 23:26:37.941: INFO: (1) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:462/proxy/: tls qux (200; 5.128355ms)
Aug 26 23:26:37.944: INFO: (2) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:443/proxy/: test (200; 5.408418ms)
Aug 26 23:26:37.947: INFO: (2) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname2/proxy/: bar (200; 5.507057ms)
Aug 26 23:26:37.947: INFO: (2) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:1080/proxy/: ... (200; 5.447417ms)
Aug 26 23:26:37.947: INFO: (2) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 5.495931ms)
Aug 26 23:26:37.947: INFO: (2) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:460/proxy/: tls baz (200; 5.52242ms)
Aug 26 23:26:37.947: INFO: (2) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:162/proxy/: bar (200; 5.454731ms)
Aug 26 23:26:37.947: INFO: (2) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:1080/proxy/: test<... (200; 5.567358ms)
Aug 26 23:26:37.949: INFO: (3) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:460/proxy/: tls baz (200; 2.246104ms)
Aug 26 23:26:37.950: INFO: (3) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 2.891795ms)
Aug 26 23:26:37.950: INFO: (3) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:162/proxy/: bar (200; 3.06074ms)
Aug 26 23:26:37.951: INFO: (3) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 3.530396ms)
Aug 26 23:26:37.951: INFO: (3) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44/proxy/: test (200; 3.596972ms)
Aug 26 23:26:37.951: INFO: (3) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:1080/proxy/: ... (200; 3.583507ms)
Aug 26 23:26:37.951: INFO: (3) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:1080/proxy/: test<... (200; 3.727124ms)
Aug 26 23:26:37.951: INFO: (3) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:462/proxy/: tls qux (200; 3.719936ms)
Aug 26 23:26:37.951: INFO: (3) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:443/proxy/: ... (200; 3.770684ms)
Aug 26 23:26:37.956: INFO: (4) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:462/proxy/: tls qux (200; 3.792972ms)
Aug 26 23:26:37.956: INFO: (4) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 3.822012ms)
Aug 26 23:26:37.956: INFO: (4) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:1080/proxy/: test<... (200; 3.849631ms)
Aug 26 23:26:37.956: INFO: (4) /api/v1/namespaces/proxy-2876/services/https:proxy-service-sc5nv:tlsportname2/proxy/: tls qux (200; 3.970207ms)
Aug 26 23:26:37.956: INFO: (4) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname1/proxy/: foo (200; 4.123505ms)
Aug 26 23:26:37.956: INFO: (4) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname1/proxy/: foo (200; 4.074963ms)
Aug 26 23:26:37.957: INFO: (4) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname2/proxy/: bar (200; 4.361738ms)
Aug 26 23:26:37.957: INFO: (4) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname2/proxy/: bar (200; 4.505964ms)
Aug 26 23:26:37.957: INFO: (4) /api/v1/namespaces/proxy-2876/services/https:proxy-service-sc5nv:tlsportname1/proxy/: tls baz (200; 4.560467ms)
Aug 26 23:26:37.957: INFO: (4) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:460/proxy/: tls baz (200; 4.689206ms)
Aug 26 23:26:37.957: INFO: (4) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:162/proxy/: bar (200; 4.71949ms)
Aug 26 23:26:37.957: INFO: (4) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:443/proxy/: test (200; 4.774176ms)
Aug 26 23:26:37.957: INFO: (4) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:162/proxy/: bar (200; 4.8585ms)
Aug 26 23:26:37.960: INFO: (5) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:162/proxy/: bar (200; 2.83153ms)
Aug 26 23:26:37.960: INFO: (5) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:460/proxy/: tls baz (200; 2.85113ms)
Aug 26 23:26:37.960: INFO: (5) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:462/proxy/: tls qux (200; 2.900477ms)
Aug 26 23:26:37.960: INFO: (5) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 2.966277ms)
Aug 26 23:26:37.960: INFO: (5) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44/proxy/: test (200; 3.071476ms)
Aug 26 23:26:37.960: INFO: (5) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:162/proxy/: bar (200; 3.089676ms)
Aug 26 23:26:37.960: INFO: (5) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:1080/proxy/: test<... (200; 3.097449ms)
Aug 26 23:26:37.961: INFO: (5) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 3.531607ms)
Aug 26 23:26:37.961: INFO: (5) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:1080/proxy/: ... (200; 3.614565ms)
Aug 26 23:26:37.961: INFO: (5) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname1/proxy/: foo (200; 3.686756ms)
Aug 26 23:26:37.961: INFO: (5) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:443/proxy/: test<... (200; 2.978074ms)
Aug 26 23:26:37.964: INFO: (6) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:162/proxy/: bar (200; 2.950906ms)
Aug 26 23:26:37.964: INFO: (6) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 3.013028ms)
Aug 26 23:26:37.964: INFO: (6) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 2.964465ms)
Aug 26 23:26:37.964: INFO: (6) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:443/proxy/: test (200; 3.050474ms)
Aug 26 23:26:37.965: INFO: (6) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname2/proxy/: bar (200; 3.655137ms)
Aug 26 23:26:37.965: INFO: (6) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname2/proxy/: bar (200; 3.933529ms)
Aug 26 23:26:37.965: INFO: (6) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname1/proxy/: foo (200; 4.105812ms)
Aug 26 23:26:37.965: INFO: (6) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:1080/proxy/: ... (200; 4.100992ms)
Aug 26 23:26:37.965: INFO: (6) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:462/proxy/: tls qux (200; 4.169834ms)
Aug 26 23:26:37.965: INFO: (6) /api/v1/namespaces/proxy-2876/services/https:proxy-service-sc5nv:tlsportname2/proxy/: tls qux (200; 4.201709ms)
Aug 26 23:26:37.966: INFO: (6) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname1/proxy/: foo (200; 4.531155ms)
Aug 26 23:26:37.966: INFO: (6) /api/v1/namespaces/proxy-2876/services/https:proxy-service-sc5nv:tlsportname1/proxy/: tls baz (200; 4.659275ms)
Aug 26 23:26:37.968: INFO: (7) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:1080/proxy/: test<... (200; 1.918017ms)
Aug 26 23:26:37.968: INFO: (7) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44/proxy/: test (200; 2.060798ms)
Aug 26 23:26:37.969: INFO: (7) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 2.678565ms)
Aug 26 23:26:37.969: INFO: (7) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:443/proxy/: ... (200; 3.385197ms)
Aug 26 23:26:37.970: INFO: (7) /api/v1/namespaces/proxy-2876/services/https:proxy-service-sc5nv:tlsportname2/proxy/: tls qux (200; 3.602428ms)
Aug 26 23:26:37.970: INFO: (7) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:462/proxy/: tls qux (200; 3.609824ms)
Aug 26 23:26:37.970: INFO: (7) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname2/proxy/: bar (200; 4.112982ms)
Aug 26 23:26:37.970: INFO: (7) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname1/proxy/: foo (200; 4.120174ms)
Aug 26 23:26:37.970: INFO: (7) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:162/proxy/: bar (200; 4.080011ms)
Aug 26 23:26:37.970: INFO: (7) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname2/proxy/: bar (200; 4.435929ms)
Aug 26 23:26:37.971: INFO: (7) /api/v1/namespaces/proxy-2876/services/https:proxy-service-sc5nv:tlsportname1/proxy/: tls baz (200; 4.49654ms)
Aug 26 23:26:37.971: INFO: (7) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname1/proxy/: foo (200; 4.528541ms)
Aug 26 23:26:37.973: INFO: (8) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 2.684875ms)
Aug 26 23:26:37.974: INFO: (8) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:443/proxy/: test (200; 3.137776ms)
Aug 26 23:26:37.974: INFO: (8) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:162/proxy/: bar (200; 3.167078ms)
Aug 26 23:26:37.974: INFO: (8) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:1080/proxy/: test<... (200; 3.146166ms)
Aug 26 23:26:37.974: INFO: (8) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:162/proxy/: bar (200; 3.200731ms)
Aug 26 23:26:37.974: INFO: (8) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:460/proxy/: tls baz (200; 3.181803ms)
Aug 26 23:26:37.974: INFO: (8) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:462/proxy/: tls qux (200; 3.190012ms)
Aug 26 23:26:37.974: INFO: (8) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 3.261867ms)
Aug 26 23:26:37.974: INFO: (8) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:1080/proxy/: ... (200; 3.41205ms)
Aug 26 23:26:37.975: INFO: (8) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname2/proxy/: bar (200; 3.905149ms)
Aug 26 23:26:37.975: INFO: (8) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname1/proxy/: foo (200; 3.910768ms)
Aug 26 23:26:37.975: INFO: (8) /api/v1/namespaces/proxy-2876/services/https:proxy-service-sc5nv:tlsportname1/proxy/: tls baz (200; 4.028524ms)
Aug 26 23:26:37.975: INFO: (8) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname1/proxy/: foo (200; 3.87873ms)
Aug 26 23:26:37.975: INFO: (8) /api/v1/namespaces/proxy-2876/services/https:proxy-service-sc5nv:tlsportname2/proxy/: tls qux (200; 4.120556ms)
Aug 26 23:26:37.975: INFO: (8) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname2/proxy/: bar (200; 4.31295ms)
Aug 26 23:26:37.977: INFO: (9) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:162/proxy/: bar (200; 2.429387ms)
Aug 26 23:26:37.978: INFO: (9) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname2/proxy/: bar (200; 3.551582ms)
Aug 26 23:26:37.979: INFO: (9) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname2/proxy/: bar (200; 3.536702ms)
Aug 26 23:26:37.979: INFO: (9) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname1/proxy/: foo (200; 3.762658ms)
Aug 26 23:26:37.979: INFO: (9) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:1080/proxy/: test<... (200; 3.81188ms)
Aug 26 23:26:37.979: INFO: (9) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname1/proxy/: foo (200; 3.83278ms)
Aug 26 23:26:37.979: INFO: (9) /api/v1/namespaces/proxy-2876/services/https:proxy-service-sc5nv:tlsportname1/proxy/: tls baz (200; 3.85413ms)
Aug 26 23:26:37.979: INFO: (9) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:162/proxy/: bar (200; 3.901375ms)
Aug 26 23:26:37.979: INFO: (9) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:443/proxy/: ... (200; 4.049808ms)
Aug 26 23:26:37.979: INFO: (9) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 4.130838ms)
Aug 26 23:26:37.979: INFO: (9) /api/v1/namespaces/proxy-2876/services/https:proxy-service-sc5nv:tlsportname2/proxy/: tls qux (200; 4.144211ms)
Aug 26 23:26:37.979: INFO: (9) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 4.210789ms)
Aug 26 23:26:37.979: INFO: (9) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:460/proxy/: tls baz (200; 4.232503ms)
Aug 26 23:26:37.979: INFO: (9) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44/proxy/: test (200; 4.216702ms)
Aug 26 23:26:37.981: INFO: (10) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44/proxy/: test (200; 2.129722ms)
Aug 26 23:26:37.983: INFO: (10) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:1080/proxy/: test<... (200; 3.24681ms)
Aug 26 23:26:37.983: INFO: (10) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:162/proxy/: bar (200; 3.923363ms)
Aug 26 23:26:37.983: INFO: (10) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 2.95623ms)
Aug 26 23:26:37.983: INFO: (10) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:1080/proxy/: ... (200; 4.009056ms)
Aug 26 23:26:37.983: INFO: (10) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:162/proxy/: bar (200; 3.27125ms)
Aug 26 23:26:37.983: INFO: (10) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:460/proxy/: tls baz (200; 3.020581ms)
Aug 26 23:26:37.983: INFO: (10) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 3.333158ms)
Aug 26 23:26:37.984: INFO: (10) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:443/proxy/: test<... (200; 3.445816ms)
Aug 26 23:26:37.989: INFO: (11) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname2/proxy/: bar (200; 3.469968ms)
Aug 26 23:26:37.989: INFO: (11) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:443/proxy/: test (200; 5.063021ms)
Aug 26 23:26:37.990: INFO: (11) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname2/proxy/: bar (200; 5.12735ms)
Aug 26 23:26:37.990: INFO: (11) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:162/proxy/: bar (200; 5.144812ms)
Aug 26 23:26:37.990: INFO: (11) /api/v1/namespaces/proxy-2876/services/https:proxy-service-sc5nv:tlsportname1/proxy/: tls baz (200; 5.115182ms)
Aug 26 23:26:37.990: INFO: (11) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 5.145305ms)
Aug 26 23:26:37.990: INFO: (11) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:1080/proxy/: ... (200; 5.233494ms)
Aug 26 23:26:37.990: INFO: (11) /api/v1/namespaces/proxy-2876/services/https:proxy-service-sc5nv:tlsportname2/proxy/: tls qux (200; 5.333225ms)
Aug 26 23:26:37.990: INFO: (11) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname1/proxy/: foo (200; 5.350895ms)
Aug 26 23:26:37.996: INFO: (12) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:1080/proxy/: test<... (200; 4.97677ms)
Aug 26 23:26:37.996: INFO: (12) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44/proxy/: test (200; 5.251766ms)
Aug 26 23:26:37.996: INFO: (12) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:462/proxy/: tls qux (200; 5.581749ms)
Aug 26 23:26:37.996: INFO: (12) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:162/proxy/: bar (200; 5.510328ms)
Aug 26 23:26:37.996: INFO: (12) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 5.43936ms)
Aug 26 23:26:37.996: INFO: (12) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:1080/proxy/: ... (200; 5.72406ms)
Aug 26 23:26:37.996: INFO: (12) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:443/proxy/: test (200; 6.629049ms)
Aug 26 23:26:38.005: INFO: (13) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:462/proxy/: tls qux (200; 6.877546ms)
Aug 26 23:26:38.005: INFO: (13) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:443/proxy/: ... (200; 9.073985ms)
Aug 26 23:26:38.007: INFO: (13) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname1/proxy/: foo (200; 9.531854ms)
Aug 26 23:26:38.007: INFO: (13) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname2/proxy/: bar (200; 9.441092ms)
Aug 26 23:26:38.007: INFO: (13) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname2/proxy/: bar (200; 9.542394ms)
Aug 26 23:26:38.007: INFO: (13) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname1/proxy/: foo (200; 9.483616ms)
Aug 26 23:26:38.007: INFO: (13) /api/v1/namespaces/proxy-2876/services/https:proxy-service-sc5nv:tlsportname2/proxy/: tls qux (200; 9.523799ms)
Aug 26 23:26:38.007: INFO: (13) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:460/proxy/: tls baz (200; 9.614968ms)
Aug 26 23:26:38.007: INFO: (13) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 9.51171ms)
Aug 26 23:26:38.007: INFO: (13) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:1080/proxy/: test<... (200; 9.554521ms)
Aug 26 23:26:38.007: INFO: (13) /api/v1/namespaces/proxy-2876/services/https:proxy-service-sc5nv:tlsportname1/proxy/: tls baz (200; 9.596662ms)
Aug 26 23:26:38.011: INFO: (14) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:162/proxy/: bar (200; 3.174578ms)
Aug 26 23:26:38.011: INFO: (14) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 3.102503ms)
Aug 26 23:26:38.011: INFO: (14) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 3.617223ms)
Aug 26 23:26:38.011: INFO: (14) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:1080/proxy/: ... (200; 3.622566ms)
Aug 26 23:26:38.011: INFO: (14) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44/proxy/: test (200; 3.617474ms)
Aug 26 23:26:38.011: INFO: (14) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:462/proxy/: tls qux (200; 3.758919ms)
Aug 26 23:26:38.011: INFO: (14) /api/v1/namespaces/proxy-2876/services/https:proxy-service-sc5nv:tlsportname2/proxy/: tls qux (200; 3.805562ms)
Aug 26 23:26:38.011: INFO: (14) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname2/proxy/: bar (200; 3.949477ms)
Aug 26 23:26:38.012: INFO: (14) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname1/proxy/: foo (200; 4.250941ms)
Aug 26 23:26:38.013: INFO: (14) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname1/proxy/: foo (200; 5.201538ms)
Aug 26 23:26:38.015: INFO: (14) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:460/proxy/: tls baz (200; 7.774403ms)
Aug 26 23:26:38.016: INFO: (14) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname2/proxy/: bar (200; 8.034726ms)
Aug 26 23:26:38.016: INFO: (14) /api/v1/namespaces/proxy-2876/services/https:proxy-service-sc5nv:tlsportname1/proxy/: tls baz (200; 8.496077ms)
Aug 26 23:26:38.017: INFO: (14) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:443/proxy/: test<... (200; 9.443218ms)
Aug 26 23:26:38.025: INFO: (15) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:162/proxy/: bar (200; 8.373215ms)
Aug 26 23:26:38.025: INFO: (15) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44/proxy/: test (200; 8.362178ms)
Aug 26 23:26:38.025: INFO: (15) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:460/proxy/: tls baz (200; 8.27733ms)
Aug 26 23:26:38.025: INFO: (15) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:1080/proxy/: ... (200; 8.450268ms)
Aug 26 23:26:38.053: INFO: (15) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:443/proxy/: test<... (200; 36.11576ms)
Aug 26 23:26:38.053: INFO: (15) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 36.064576ms)
Aug 26 23:26:38.053: INFO: (15) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 36.234125ms)
Aug 26 23:26:38.053: INFO: (15) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:462/proxy/: tls qux (200; 36.30079ms)
Aug 26 23:26:38.055: INFO: (15) /api/v1/namespaces/proxy-2876/services/https:proxy-service-sc5nv:tlsportname2/proxy/: tls qux (200; 37.797497ms)
Aug 26 23:26:38.055: INFO: (15) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname2/proxy/: bar (200; 38.049722ms)
Aug 26 23:26:38.055: INFO: (15) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname1/proxy/: foo (200; 38.098634ms)
Aug 26 23:26:38.055: INFO: (15) /api/v1/namespaces/proxy-2876/services/https:proxy-service-sc5nv:tlsportname1/proxy/: tls baz (200; 38.166903ms)
Aug 26 23:26:38.055: INFO: (15) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname1/proxy/: foo (200; 38.20998ms)
Aug 26 23:26:38.055: INFO: (15) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname2/proxy/: bar (200; 38.192843ms)
Aug 26 23:26:38.059: INFO: (16) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 3.252335ms)
Aug 26 23:26:38.059: INFO: (16) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:1080/proxy/: ... (200; 3.401907ms)
Aug 26 23:26:38.059: INFO: (16) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:162/proxy/: bar (200; 3.449047ms)
Aug 26 23:26:38.059: INFO: (16) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44/proxy/: test (200; 3.67693ms)
Aug 26 23:26:38.060: INFO: (16) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 4.125814ms)
Aug 26 23:26:38.060: INFO: (16) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:1080/proxy/: test<... (200; 4.379102ms)
Aug 26 23:26:38.060: INFO: (16) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:460/proxy/: tls baz (200; 4.375389ms)
Aug 26 23:26:38.060: INFO: (16) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:462/proxy/: tls qux (200; 4.505571ms)
Aug 26 23:26:38.060: INFO: (16) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:443/proxy/: ... (200; 8.401851ms)
Aug 26 23:26:38.070: INFO: (17) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:443/proxy/: test<... (200; 8.422927ms)
Aug 26 23:26:38.070: INFO: (17) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname2/proxy/: bar (200; 8.449346ms)
Aug 26 23:26:38.070: INFO: (17) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:462/proxy/: tls qux (200; 8.498404ms)
Aug 26 23:26:38.070: INFO: (17) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:162/proxy/: bar (200; 8.435294ms)
Aug 26 23:26:38.070: INFO: (17) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44/proxy/: test (200; 8.481602ms)
Aug 26 23:26:38.070: INFO: (17) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname2/proxy/: bar (200; 8.488662ms)
Aug 26 23:26:38.070: INFO: (17) /api/v1/namespaces/proxy-2876/services/https:proxy-service-sc5nv:tlsportname2/proxy/: tls qux (200; 8.470254ms)
Aug 26 23:26:38.070: INFO: (17) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname1/proxy/: foo (200; 8.508345ms)
Aug 26 23:26:38.070: INFO: (17) /api/v1/namespaces/proxy-2876/services/https:proxy-service-sc5nv:tlsportname1/proxy/: tls baz (200; 8.825569ms)
Aug 26 23:26:38.072: INFO: (17) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname1/proxy/: foo (200; 10.769173ms)
Aug 26 23:26:38.075: INFO: (18) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname1/proxy/: foo (200; 3.092758ms)
Aug 26 23:26:38.075: INFO: (18) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 3.105114ms)
Aug 26 23:26:38.075: INFO: (18) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:462/proxy/: tls qux (200; 3.191871ms)
Aug 26 23:26:38.075: INFO: (18) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:1080/proxy/: ... (200; 3.280519ms)
Aug 26 23:26:38.075: INFO: (18) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:162/proxy/: bar (200; 3.255857ms)
Aug 26 23:26:38.076: INFO: (18) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:443/proxy/: test<... (200; 4.126897ms)
Aug 26 23:26:38.076: INFO: (18) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:460/proxy/: tls baz (200; 4.182066ms)
Aug 26 23:26:38.076: INFO: (18) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname2/proxy/: bar (200; 4.097987ms)
Aug 26 23:26:38.076: INFO: (18) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname1/proxy/: foo (200; 4.120411ms)
Aug 26 23:26:38.076: INFO: (18) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 4.150693ms)
Aug 26 23:26:38.076: INFO: (18) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44/proxy/: test (200; 4.12689ms)
Aug 26 23:26:38.080: INFO: (19) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname2/proxy/: bar (200; 3.461572ms)
Aug 26 23:26:38.080: INFO: (19) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname2/proxy/: bar (200; 3.644262ms)
Aug 26 23:26:38.080: INFO: (19) /api/v1/namespaces/proxy-2876/services/http:proxy-service-sc5nv:portname1/proxy/: foo (200; 3.991692ms)
Aug 26 23:26:38.081: INFO: (19) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:460/proxy/: tls baz (200; 4.271929ms)
Aug 26 23:26:38.081: INFO: (19) /api/v1/namespaces/proxy-2876/services/https:proxy-service-sc5nv:tlsportname1/proxy/: tls baz (200; 4.238974ms)
Aug 26 23:26:38.081: INFO: (19) /api/v1/namespaces/proxy-2876/pods/http:proxy-service-sc5nv-4lm44:160/proxy/: foo (200; 4.28168ms)
Aug 26 23:26:38.081: INFO: (19) /api/v1/namespaces/proxy-2876/services/proxy-service-sc5nv:portname1/proxy/: foo (200; 4.243107ms)
Aug 26 23:26:38.081: INFO: (19) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44/proxy/: test (200; 4.295121ms)
Aug 26 23:26:38.081: INFO: (19) /api/v1/namespaces/proxy-2876/pods/proxy-service-sc5nv-4lm44:1080/proxy/: test<... (200; 4.234928ms)
Aug 26 23:26:38.081: INFO: (19) /api/v1/namespaces/proxy-2876/pods/https:proxy-service-sc5nv-4lm44:443/proxy/: ... (200; 4.441759ms)
STEP: deleting ReplicationController proxy-service-sc5nv in namespace proxy-2876, will wait for the garbage collector to delete the pods
Aug 26 23:26:38.137: INFO: Deleting ReplicationController proxy-service-sc5nv took: 4.54197ms
Aug 26 23:26:38.437: INFO: Terminating ReplicationController proxy-service-sc5nv pods took: 300.177865ms
[AfterEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:26:51.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2876" for this suite.

• [SLOW TEST:26.917 seconds]
[sig-network] Proxy
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":175,"skipped":2788,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:26:51.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Aug 26 23:26:51.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8386'
Aug 26 23:26:51.969: INFO: stderr: ""
Aug 26 23:26:51.969: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 26 23:26:52.973: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 23:26:52.973: INFO: Found 0 / 1
Aug 26 23:26:54.096: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 23:26:54.096: INFO: Found 0 / 1
Aug 26 23:26:54.973: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 23:26:54.973: INFO: Found 0 / 1
Aug 26 23:26:55.973: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 23:26:55.973: INFO: Found 1 / 1
Aug 26 23:26:55.973: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Aug 26 23:26:55.977: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 23:26:55.977: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 26 23:26:55.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-rj29r --namespace=kubectl-8386 -p {"metadata":{"annotations":{"x":"y"}}}'
Aug 26 23:26:56.092: INFO: stderr: ""
Aug 26 23:26:56.092: INFO: stdout: "pod/agnhost-master-rj29r patched\n"
STEP: checking annotations
Aug 26 23:26:56.121: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 23:26:56.121: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:26:56.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8386" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":176,"skipped":2805,"failed":0}

------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:26:56.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 26 23:27:06.362: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 26 23:27:06.378: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 26 23:27:08.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 26 23:27:08.383: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 26 23:27:10.379: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 26 23:27:10.383: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 26 23:27:12.379: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 26 23:27:12.382: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:27:12.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3567" for this suite.

• [SLOW TEST:16.190 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2805,"failed":0}
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:27:12.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-4491
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-4491
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4491
Aug 26 23:27:12.481: INFO: Found 0 stateful pods, waiting for 1
Aug 26 23:27:22.486: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Aug 26 23:27:22.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4491 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 23:27:22.733: INFO: stderr: "I0826 23:27:22.618746    3078 log.go:172] (0xc000840b00) (0xc0007da1e0) Create stream\nI0826 23:27:22.618802    3078 log.go:172] (0xc000840b00) (0xc0007da1e0) Stream added, broadcasting: 1\nI0826 23:27:22.621019    3078 log.go:172] (0xc000840b00) Reply frame received for 1\nI0826 23:27:22.621045    3078 log.go:172] (0xc000840b00) (0xc000663ae0) Create stream\nI0826 23:27:22.621053    3078 log.go:172] (0xc000840b00) (0xc000663ae0) Stream added, broadcasting: 3\nI0826 23:27:22.621610    3078 log.go:172] (0xc000840b00) Reply frame received for 3\nI0826 23:27:22.621632    3078 log.go:172] (0xc000840b00) (0xc0007da280) Create stream\nI0826 23:27:22.621638    3078 log.go:172] (0xc000840b00) (0xc0007da280) Stream added, broadcasting: 5\nI0826 23:27:22.622219    3078 log.go:172] (0xc000840b00) Reply frame received for 5\nI0826 23:27:22.681572    3078 log.go:172] (0xc000840b00) Data frame received for 5\nI0826 23:27:22.681600    3078 log.go:172] (0xc0007da280) (5) Data frame handling\nI0826 23:27:22.681620    3078 log.go:172] (0xc0007da280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 23:27:22.718866    3078 log.go:172] (0xc000840b00) Data frame received for 3\nI0826 23:27:22.718909    3078 log.go:172] (0xc000663ae0) (3) Data frame handling\nI0826 23:27:22.718948    3078 log.go:172] (0xc000663ae0) (3) Data frame sent\nI0826 23:27:22.719037    3078 log.go:172] (0xc000840b00) Data frame received for 5\nI0826 23:27:22.719069    3078 log.go:172] (0xc000840b00) Data frame received for 3\nI0826 23:27:22.719116    3078 log.go:172] (0xc000663ae0) (3) Data frame handling\nI0826 23:27:22.719146    3078 log.go:172] (0xc0007da280) (5) Data frame handling\nI0826 23:27:22.721158    3078 log.go:172] (0xc000840b00) Data frame received for 1\nI0826 23:27:22.721178    3078 log.go:172] (0xc0007da1e0) (1) Data frame handling\nI0826 23:27:22.721202    3078 log.go:172] (0xc0007da1e0) (1) Data frame sent\nI0826 23:27:22.721324    3078 log.go:172] (0xc000840b00) (0xc0007da1e0) Stream removed, broadcasting: 1\nI0826 23:27:22.721485    3078 log.go:172] (0xc000840b00) Go away received\nI0826 23:27:22.721609    3078 log.go:172] (0xc000840b00) (0xc0007da1e0) Stream removed, broadcasting: 1\nI0826 23:27:22.721628    3078 log.go:172] (0xc000840b00) (0xc000663ae0) Stream removed, broadcasting: 3\nI0826 23:27:22.721634    3078 log.go:172] (0xc000840b00) (0xc0007da280) Stream removed, broadcasting: 5\n"
Aug 26 23:27:22.733: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 23:27:22.733: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 26 23:27:22.736: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 26 23:27:32.741: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 23:27:32.741: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 23:27:32.759: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 26 23:27:32.759: INFO: ss-0  jerma-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:12 +0000 UTC  }]
Aug 26 23:27:32.759: INFO: 
Aug 26 23:27:32.759: INFO: StatefulSet ss has not reached scale 3, at 1
Aug 26 23:27:33.764: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991427436s
Aug 26 23:27:34.928: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986410186s
Aug 26 23:27:35.933: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.822759145s
Aug 26 23:27:36.979: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.817723287s
Aug 26 23:27:37.984: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.771584916s
Aug 26 23:27:38.989: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.766516417s
Aug 26 23:27:39.994: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.761396932s
Aug 26 23:27:40.999: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.756791155s
Aug 26 23:27:42.003: INFO: Verifying statefulset ss doesn't scale past 3 for another 752.006691ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4491
Aug 26 23:27:43.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4491 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 23:27:43.237: INFO: stderr: "I0826 23:27:43.154340    3101 log.go:172] (0xc000bfa000) (0xc000a3c000) Create stream\nI0826 23:27:43.154412    3101 log.go:172] (0xc000bfa000) (0xc000a3c000) Stream added, broadcasting: 1\nI0826 23:27:43.156470    3101 log.go:172] (0xc000bfa000) Reply frame received for 1\nI0826 23:27:43.156497    3101 log.go:172] (0xc000bfa000) (0xc000a3c0a0) Create stream\nI0826 23:27:43.156505    3101 log.go:172] (0xc000bfa000) (0xc000a3c0a0) Stream added, broadcasting: 3\nI0826 23:27:43.157425    3101 log.go:172] (0xc000bfa000) Reply frame received for 3\nI0826 23:27:43.157458    3101 log.go:172] (0xc000bfa000) (0xc0004ab720) Create stream\nI0826 23:27:43.157467    3101 log.go:172] (0xc000bfa000) (0xc0004ab720) Stream added, broadcasting: 5\nI0826 23:27:43.158437    3101 log.go:172] (0xc000bfa000) Reply frame received for 5\nI0826 23:27:43.229328    3101 log.go:172] (0xc000bfa000) Data frame received for 5\nI0826 23:27:43.229369    3101 log.go:172] (0xc0004ab720) (5) Data frame handling\nI0826 23:27:43.229380    3101 log.go:172] (0xc0004ab720) (5) Data frame sent\nI0826 23:27:43.229389    3101 log.go:172] (0xc000bfa000) Data frame received for 5\nI0826 23:27:43.229398    3101 log.go:172] (0xc0004ab720) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0826 23:27:43.229419    3101 log.go:172] (0xc000bfa000) Data frame received for 3\nI0826 23:27:43.229432    3101 log.go:172] (0xc000a3c0a0) (3) Data frame handling\nI0826 23:27:43.229460    3101 log.go:172] (0xc000a3c0a0) (3) Data frame sent\nI0826 23:27:43.229488    3101 log.go:172] (0xc000bfa000) Data frame received for 3\nI0826 23:27:43.229502    3101 log.go:172] (0xc000a3c0a0) (3) Data frame handling\nI0826 23:27:43.230680    3101 log.go:172] (0xc000bfa000) Data frame received for 1\nI0826 23:27:43.230699    3101 log.go:172] (0xc000a3c000) (1) Data frame handling\nI0826 23:27:43.230723    3101 log.go:172] (0xc000a3c000) (1) Data frame sent\nI0826 23:27:43.230743    3101 log.go:172] (0xc000bfa000) (0xc000a3c000) Stream removed, broadcasting: 1\nI0826 23:27:43.230764    3101 log.go:172] (0xc000bfa000) Go away received\nI0826 23:27:43.231078    3101 log.go:172] (0xc000bfa000) (0xc000a3c000) Stream removed, broadcasting: 1\nI0826 23:27:43.231107    3101 log.go:172] (0xc000bfa000) (0xc000a3c0a0) Stream removed, broadcasting: 3\nI0826 23:27:43.231116    3101 log.go:172] (0xc000bfa000) (0xc0004ab720) Stream removed, broadcasting: 5\n"
Aug 26 23:27:43.237: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 23:27:43.237: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 26 23:27:43.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4491 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 23:27:43.435: INFO: stderr: "I0826 23:27:43.363976    3122 log.go:172] (0xc0001042c0) (0xc000510000) Create stream\nI0826 23:27:43.364042    3122 log.go:172] (0xc0001042c0) (0xc000510000) Stream added, broadcasting: 1\nI0826 23:27:43.366633    3122 log.go:172] (0xc0001042c0) Reply frame received for 1\nI0826 23:27:43.366663    3122 log.go:172] (0xc0001042c0) (0xc0006fda40) Create stream\nI0826 23:27:43.366670    3122 log.go:172] (0xc0001042c0) (0xc0006fda40) Stream added, broadcasting: 3\nI0826 23:27:43.367421    3122 log.go:172] (0xc0001042c0) Reply frame received for 3\nI0826 23:27:43.367450    3122 log.go:172] (0xc0001042c0) (0xc000510140) Create stream\nI0826 23:27:43.367459    3122 log.go:172] (0xc0001042c0) (0xc000510140) Stream added, broadcasting: 5\nI0826 23:27:43.368262    3122 log.go:172] (0xc0001042c0) Reply frame received for 5\nI0826 23:27:43.422818    3122 log.go:172] (0xc0001042c0) Data frame received for 3\nI0826 23:27:43.422856    3122 log.go:172] (0xc0006fda40) (3) Data frame handling\nI0826 23:27:43.422878    3122 log.go:172] (0xc0006fda40) (3) Data frame sent\nI0826 23:27:43.422889    3122 log.go:172] (0xc0001042c0) Data frame received for 3\nI0826 23:27:43.422900    3122 log.go:172] (0xc0006fda40) (3) Data frame handling\nI0826 23:27:43.423000    3122 log.go:172] (0xc0001042c0) Data frame received for 5\nI0826 23:27:43.423029    3122 log.go:172] (0xc000510140) (5) Data frame handling\nI0826 23:27:43.423055    3122 log.go:172] (0xc000510140) (5) Data frame sent\nI0826 23:27:43.423074    3122 log.go:172] (0xc0001042c0) Data frame received for 5\nI0826 23:27:43.423085    3122 log.go:172] (0xc000510140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0826 23:27:43.424688    3122 log.go:172] (0xc0001042c0) Data frame received for 1\nI0826 23:27:43.424856    3122 log.go:172] (0xc000510000) (1) Data frame handling\nI0826 23:27:43.424900    3122 log.go:172] (0xc000510000) (1) Data frame sent\nI0826 23:27:43.424931    3122 log.go:172] (0xc0001042c0) (0xc000510000) Stream removed, broadcasting: 1\nI0826 23:27:43.424965    3122 log.go:172] (0xc0001042c0) Go away received\nI0826 23:27:43.425411    3122 log.go:172] (0xc0001042c0) (0xc000510000) Stream removed, broadcasting: 1\nI0826 23:27:43.425437    3122 log.go:172] (0xc0001042c0) (0xc0006fda40) Stream removed, broadcasting: 3\nI0826 23:27:43.425457    3122 log.go:172] (0xc0001042c0) (0xc000510140) Stream removed, broadcasting: 5\n"
Aug 26 23:27:43.435: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 23:27:43.435: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 26 23:27:43.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4491 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 23:27:43.658: INFO: stderr: "I0826 23:27:43.577919    3145 log.go:172] (0xc0000f6b00) (0xc0006c5f40) Create stream\nI0826 23:27:43.577991    3145 log.go:172] (0xc0000f6b00) (0xc0006c5f40) Stream added, broadcasting: 1\nI0826 23:27:43.580252    3145 log.go:172] (0xc0000f6b00) Reply frame received for 1\nI0826 23:27:43.580278    3145 log.go:172] (0xc0000f6b00) (0xc000682820) Create stream\nI0826 23:27:43.580286    3145 log.go:172] (0xc0000f6b00) (0xc000682820) Stream added, broadcasting: 3\nI0826 23:27:43.581038    3145 log.go:172] (0xc0000f6b00) Reply frame received for 3\nI0826 23:27:43.581076    3145 log.go:172] (0xc0000f6b00) (0xc0002235e0) Create stream\nI0826 23:27:43.581084    3145 log.go:172] (0xc0000f6b00) (0xc0002235e0) Stream added, broadcasting: 5\nI0826 23:27:43.581714    3145 log.go:172] (0xc0000f6b00) Reply frame received for 5\nI0826 23:27:43.646749    3145 log.go:172] (0xc0000f6b00) Data frame received for 3\nI0826 23:27:43.646790    3145 log.go:172] (0xc000682820) (3) Data frame handling\nI0826 23:27:43.646817    3145 log.go:172] (0xc000682820) (3) Data frame sent\nI0826 23:27:43.646828    3145 log.go:172] (0xc0000f6b00) Data frame received for 3\nI0826 23:27:43.646837    3145 log.go:172] (0xc000682820) (3) Data frame handling\nI0826 23:27:43.646879    3145 log.go:172] (0xc0000f6b00) Data frame received for 5\nI0826 23:27:43.646907    3145 log.go:172] (0xc0002235e0) (5) Data frame handling\nI0826 23:27:43.646934    3145 log.go:172] (0xc0002235e0) (5) Data frame sent\nI0826 23:27:43.646946    3145 log.go:172] (0xc0000f6b00) Data frame received for 5\nI0826 23:27:43.646955    3145 log.go:172] (0xc0002235e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0826 23:27:43.648822    3145 log.go:172] (0xc0000f6b00) Data frame received for 1\nI0826 23:27:43.648841    3145 log.go:172] (0xc0006c5f40) (1) Data frame handling\nI0826 23:27:43.648853    3145 log.go:172] (0xc0006c5f40) (1) Data frame sent\nI0826 23:27:43.648869    3145 log.go:172] (0xc0000f6b00) (0xc0006c5f40) Stream removed, broadcasting: 1\nI0826 23:27:43.649192    3145 log.go:172] (0xc0000f6b00) (0xc0006c5f40) Stream removed, broadcasting: 1\nI0826 23:27:43.649211    3145 log.go:172] (0xc0000f6b00) (0xc000682820) Stream removed, broadcasting: 3\nI0826 23:27:43.649361    3145 log.go:172] (0xc0000f6b00) (0xc0002235e0) Stream removed, broadcasting: 5\n"
Aug 26 23:27:43.658: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 23:27:43.658: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 26 23:27:43.661: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 23:27:43.661: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 23:27:43.661: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Aug 26 23:27:43.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4491 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 23:27:43.888: INFO: stderr: "I0826 23:27:43.809445    3168 log.go:172] (0xc000627130) (0xc000a5a000) Create stream\nI0826 23:27:43.809508    3168 log.go:172] (0xc000627130) (0xc000a5a000) Stream added, broadcasting: 1\nI0826 23:27:43.812287    3168 log.go:172] (0xc000627130) Reply frame received for 1\nI0826 23:27:43.812317    3168 log.go:172] (0xc000627130) (0xc000667b80) Create stream\nI0826 23:27:43.812329    3168 log.go:172] (0xc000627130) (0xc000667b80) Stream added, broadcasting: 3\nI0826 23:27:43.813274    3168 log.go:172] (0xc000627130) Reply frame received for 3\nI0826 23:27:43.813319    3168 log.go:172] (0xc000627130) (0xc000a5a0a0) Create stream\nI0826 23:27:43.813335    3168 log.go:172] (0xc000627130) (0xc000a5a0a0) Stream added, broadcasting: 5\nI0826 23:27:43.814107    3168 log.go:172] (0xc000627130) Reply frame received for 5\nI0826 23:27:43.879212    3168 log.go:172] (0xc000627130) Data frame received for 5\nI0826 23:27:43.879248    3168 log.go:172] (0xc000a5a0a0) (5) Data frame handling\nI0826 23:27:43.879256    3168 log.go:172] (0xc000a5a0a0) (5) Data frame sent\nI0826 23:27:43.879261    3168 log.go:172] (0xc000627130) Data frame received for 5\nI0826 23:27:43.879265    3168 log.go:172] (0xc000a5a0a0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 23:27:43.879283    3168 log.go:172] (0xc000627130) Data frame received for 3\nI0826 23:27:43.879289    3168 log.go:172] (0xc000667b80) (3) Data frame handling\nI0826 23:27:43.879295    3168 log.go:172] (0xc000667b80) (3) Data frame sent\nI0826 23:27:43.879300    3168 log.go:172] (0xc000627130) Data frame received for 3\nI0826 23:27:43.879303    3168 log.go:172] (0xc000667b80) (3) Data frame handling\nI0826 23:27:43.881362    3168 log.go:172] (0xc000627130) Data frame received for 1\nI0826 23:27:43.881385    3168 log.go:172] (0xc000a5a000) (1) Data frame handling\nI0826 23:27:43.881403    3168 log.go:172] (0xc000a5a000) (1) Data frame sent\nI0826 23:27:43.881421    3168 log.go:172] (0xc000627130) (0xc000a5a000) Stream removed, broadcasting: 1\nI0826 23:27:43.881434    3168 log.go:172] (0xc000627130) Go away received\nI0826 23:27:43.881791    3168 log.go:172] (0xc000627130) (0xc000a5a000) Stream removed, broadcasting: 1\nI0826 23:27:43.881807    3168 log.go:172] (0xc000627130) (0xc000667b80) Stream removed, broadcasting: 3\nI0826 23:27:43.881815    3168 log.go:172] (0xc000627130) (0xc000a5a0a0) Stream removed, broadcasting: 5\n"
Aug 26 23:27:43.888: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 23:27:43.888: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 26 23:27:43.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4491 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 23:27:44.142: INFO: stderr: "I0826 23:27:44.023198    3190 log.go:172] (0xc0001042c0) (0xc00066c6e0) Create stream\nI0826 23:27:44.023270    3190 log.go:172] (0xc0001042c0) (0xc00066c6e0) Stream added, broadcasting: 1\nI0826 23:27:44.025117    3190 log.go:172] (0xc0001042c0) Reply frame received for 1\nI0826 23:27:44.025159    3190 log.go:172] (0xc0001042c0) (0xc0004754a0) Create stream\nI0826 23:27:44.025172    3190 log.go:172] (0xc0001042c0) (0xc0004754a0) Stream added, broadcasting: 3\nI0826 23:27:44.025906    3190 log.go:172] (0xc0001042c0) Reply frame received for 3\nI0826 23:27:44.025932    3190 log.go:172] (0xc0001042c0) (0xc000705ae0) Create stream\nI0826 23:27:44.025942    3190 log.go:172] (0xc0001042c0) (0xc000705ae0) Stream added, broadcasting: 5\nI0826 23:27:44.026680    3190 log.go:172] (0xc0001042c0) Reply frame received for 5\nI0826 23:27:44.087172    3190 log.go:172] (0xc0001042c0) Data frame received for 5\nI0826 23:27:44.087197    3190 log.go:172] (0xc000705ae0) (5) Data frame handling\nI0826 23:27:44.087210    3190 log.go:172] (0xc000705ae0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 23:27:44.127393    3190 log.go:172] (0xc0001042c0) Data frame received for 5\nI0826 23:27:44.127444    3190 log.go:172] (0xc000705ae0) (5) Data frame handling\nI0826 23:27:44.127482    3190 log.go:172] (0xc0001042c0) Data frame received for 3\nI0826 23:27:44.127524    3190 log.go:172] (0xc0004754a0) (3) Data frame handling\nI0826 23:27:44.127561    3190 log.go:172] (0xc0004754a0) (3) Data frame sent\nI0826 23:27:44.127585    3190 log.go:172] (0xc0001042c0) Data frame received for 3\nI0826 23:27:44.127607    3190 log.go:172] (0xc0004754a0) (3) Data frame handling\nI0826 23:27:44.129482    3190 log.go:172] (0xc0001042c0) Data frame received for 1\nI0826 23:27:44.129513    3190 log.go:172] (0xc00066c6e0) (1) Data frame handling\nI0826 23:27:44.129548    3190 log.go:172] (0xc00066c6e0) (1) Data frame sent\nI0826 23:27:44.129577    3190 log.go:172] (0xc0001042c0) (0xc00066c6e0) Stream removed, broadcasting: 1\nI0826 23:27:44.129605    3190 log.go:172] (0xc0001042c0) Go away received\nI0826 23:27:44.130043    3190 log.go:172] (0xc0001042c0) (0xc00066c6e0) Stream removed, broadcasting: 1\nI0826 23:27:44.130171    3190 log.go:172] (0xc0001042c0) (0xc0004754a0) Stream removed, broadcasting: 3\nI0826 23:27:44.130194    3190 log.go:172] (0xc0001042c0) (0xc000705ae0) Stream removed, broadcasting: 5\n"
Aug 26 23:27:44.142: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 23:27:44.142: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 26 23:27:44.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4491 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 23:27:44.425: INFO: stderr: "I0826 23:27:44.265874    3211 log.go:172] (0xc0005ea210) (0xc0006c9900) Create stream\nI0826 23:27:44.265942    3211 log.go:172] (0xc0005ea210) (0xc0006c9900) Stream added, broadcasting: 1\nI0826 23:27:44.268476    3211 log.go:172] (0xc0005ea210) Reply frame received for 1\nI0826 23:27:44.268510    3211 log.go:172] (0xc0005ea210) (0xc000a5a000) Create stream\nI0826 23:27:44.268523    3211 log.go:172] (0xc0005ea210) (0xc000a5a000) Stream added, broadcasting: 3\nI0826 23:27:44.269417    3211 log.go:172] (0xc0005ea210) Reply frame received for 3\nI0826 23:27:44.269444    3211 log.go:172] (0xc0005ea210) (0xc000a5a0a0) Create stream\nI0826 23:27:44.269452    3211 log.go:172] (0xc0005ea210) (0xc000a5a0a0) Stream added, broadcasting: 5\nI0826 23:27:44.270176    3211 log.go:172] (0xc0005ea210) Reply frame received for 5\nI0826 23:27:44.339591    3211 log.go:172] (0xc0005ea210) Data frame received for 5\nI0826 23:27:44.339628    3211 log.go:172] (0xc000a5a0a0) (5) Data frame handling\nI0826 23:27:44.339645    3211 log.go:172] (0xc000a5a0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 23:27:44.412533    3211 log.go:172] (0xc0005ea210) Data frame received for 3\nI0826 23:27:44.412571    3211 log.go:172] (0xc000a5a000) (3) Data frame handling\nI0826 23:27:44.412588    3211 log.go:172] (0xc000a5a000) (3) Data frame sent\nI0826 23:27:44.414968    3211 log.go:172] (0xc0005ea210) Data frame received for 5\nI0826 23:27:44.414996    3211 log.go:172] (0xc000a5a0a0) (5) Data frame handling\nI0826 23:27:44.415031    3211 log.go:172] (0xc0005ea210) Data frame received for 3\nI0826 23:27:44.415054    3211 log.go:172] (0xc000a5a000) (3) Data frame handling\nI0826 23:27:44.416402    3211 log.go:172] (0xc0005ea210) Data frame received for 1\nI0826 23:27:44.416438    3211 log.go:172] (0xc0006c9900) (1) Data frame handling\nI0826 23:27:44.416460    3211 log.go:172] (0xc0006c9900) (1) Data frame sent\nI0826 23:27:44.416490    3211 log.go:172] (0xc0005ea210) (0xc0006c9900) Stream removed, broadcasting: 1\nI0826 23:27:44.416517    3211 log.go:172] (0xc0005ea210) Go away received\nI0826 23:27:44.416954    3211 log.go:172] (0xc0005ea210) (0xc0006c9900) Stream removed, broadcasting: 1\nI0826 23:27:44.416988    3211 log.go:172] (0xc0005ea210) (0xc000a5a000) Stream removed, broadcasting: 3\nI0826 23:27:44.417009    3211 log.go:172] (0xc0005ea210) (0xc000a5a0a0) Stream removed, broadcasting: 5\n"
Aug 26 23:27:44.426: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 23:27:44.426: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 26 23:27:44.426: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 23:27:44.428: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Aug 26 23:27:54.435: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 23:27:54.435: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 23:27:54.435: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 23:27:54.449: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 23:27:54.449: INFO: ss-0  jerma-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:12 +0000 UTC  }]
Aug 26 23:27:54.449: INFO: ss-1  jerma-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:32 +0000 UTC  }]
Aug 26 23:27:54.449: INFO: ss-2  jerma-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:32 +0000 UTC  }]
Aug 26 23:27:54.449: INFO: 
Aug 26 23:27:54.449: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 26 23:27:55.453: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 23:27:55.453: INFO: ss-0  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:12 +0000 UTC  }]
Aug 26 23:27:55.453: INFO: ss-1  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:32 +0000 UTC  }]
Aug 26 23:27:55.453: INFO: ss-2  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:32 +0000 UTC  }]
Aug 26 23:27:55.453: INFO: 
Aug 26 23:27:55.453: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 26 23:27:56.590: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 23:27:56.591: INFO: ss-0  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:12 +0000 UTC  }]
Aug 26 23:27:56.591: INFO: ss-1  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:32 +0000 UTC  }]
Aug 26 23:27:56.591: INFO: ss-2  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:32 +0000 UTC  }]
Aug 26 23:27:56.591: INFO: 
Aug 26 23:27:56.591: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 26 23:27:57.595: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 23:27:57.595: INFO: ss-0  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:12 +0000 UTC  }]
Aug 26 23:27:57.595: INFO: ss-1  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:32 +0000 UTC  }]
Aug 26 23:27:57.595: INFO: ss-2  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:32 +0000 UTC  }]
Aug 26 23:27:57.595: INFO: 
Aug 26 23:27:57.595: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 26 23:27:58.600: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 23:27:58.600: INFO: ss-0  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:12 +0000 UTC  }]
Aug 26 23:27:58.600: INFO: ss-1  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:32 +0000 UTC  }]
Aug 26 23:27:58.600: INFO: 
Aug 26 23:27:58.600: INFO: StatefulSet ss has not reached scale 0, at 2
Aug 26 23:27:59.605: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 23:27:59.605: INFO: ss-0  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:12 +0000 UTC  }]
Aug 26 23:27:59.605: INFO: ss-1  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:32 +0000 UTC  }]
Aug 26 23:27:59.605: INFO: 
Aug 26 23:27:59.605: INFO: StatefulSet ss has not reached scale 0, at 2
Aug 26 23:28:00.608: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 23:28:00.608: INFO: ss-0  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:12 +0000 UTC  }]
Aug 26 23:28:00.608: INFO: ss-1  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:32 +0000 UTC  }]
Aug 26 23:28:00.608: INFO: 
Aug 26 23:28:00.608: INFO: StatefulSet ss has not reached scale 0, at 2
Aug 26 23:28:01.612: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 26 23:28:01.613: INFO: ss-1  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 23:27:32 +0000 UTC  }]
Aug 26 23:28:01.613: INFO: 
Aug 26 23:28:01.613: INFO: StatefulSet ss has not reached scale 0, at 1
Aug 26 23:28:02.733: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.829479948s
Aug 26 23:28:03.738: INFO: Verifying statefulset ss doesn't scale past 0 for another 708.519633ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-4491
Aug 26 23:28:04.743: INFO: Scaling statefulset ss to 0
Aug 26 23:28:04.750: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 26 23:28:04.752: INFO: Deleting all statefulset in ns statefulset-4491
Aug 26 23:28:04.754: INFO: Scaling statefulset ss to 0
Aug 26 23:28:04.771: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 23:28:04.773: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:28:04.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4491" for this suite.

• [SLOW TEST:52.400 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":178,"skipped":2805,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:28:04.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 26 23:28:04.888: INFO: Waiting up to 5m0s for pod "pod-5c69ec5b-b311-4bf2-9cbe-6c5547b1ccfb" in namespace "emptydir-6410" to be "success or failure"
Aug 26 23:28:04.905: INFO: Pod "pod-5c69ec5b-b311-4bf2-9cbe-6c5547b1ccfb": Phase="Pending", Reason="", readiness=false. Elapsed: 17.748616ms
Aug 26 23:28:07.117: INFO: Pod "pod-5c69ec5b-b311-4bf2-9cbe-6c5547b1ccfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229744148s
Aug 26 23:28:09.225: INFO: Pod "pod-5c69ec5b-b311-4bf2-9cbe-6c5547b1ccfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.337586218s
STEP: Saw pod success
Aug 26 23:28:09.225: INFO: Pod "pod-5c69ec5b-b311-4bf2-9cbe-6c5547b1ccfb" satisfied condition "success or failure"
Aug 26 23:28:09.228: INFO: Trying to get logs from node jerma-worker pod pod-5c69ec5b-b311-4bf2-9cbe-6c5547b1ccfb container test-container: 
STEP: delete the pod
Aug 26 23:28:09.477: INFO: Waiting for pod pod-5c69ec5b-b311-4bf2-9cbe-6c5547b1ccfb to disappear
Aug 26 23:28:09.483: INFO: Pod pod-5c69ec5b-b311-4bf2-9cbe-6c5547b1ccfb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:28:09.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6410" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2809,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:28:09.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 23:28:10.364: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 23:28:12.450: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081290, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081290, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081290, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081290, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 23:28:14.454: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081290, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081290, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081290, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081290, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 23:28:17.543: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:28:18.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4740" for this suite.
STEP: Destroying namespace "webhook-4740-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.656 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":180,"skipped":2829,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:28:18.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Aug 26 23:28:23.247: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:28:24.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1201" for this suite.

• [SLOW TEST:6.131 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":181,"skipped":2856,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:28:24.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 26 23:28:24.544: INFO: Waiting up to 5m0s for pod "downward-api-f8bea7d5-6779-4a70-8814-e0835e9ba9e1" in namespace "downward-api-9536" to be "success or failure"
Aug 26 23:28:24.557: INFO: Pod "downward-api-f8bea7d5-6779-4a70-8814-e0835e9ba9e1": Phase="Pending", Reason="", readiness=false. Elapsed: 13.078553ms
Aug 26 23:28:26.561: INFO: Pod "downward-api-f8bea7d5-6779-4a70-8814-e0835e9ba9e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016557088s
Aug 26 23:28:28.565: INFO: Pod "downward-api-f8bea7d5-6779-4a70-8814-e0835e9ba9e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020640605s
Aug 26 23:28:30.568: INFO: Pod "downward-api-f8bea7d5-6779-4a70-8814-e0835e9ba9e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023548507s
STEP: Saw pod success
Aug 26 23:28:30.568: INFO: Pod "downward-api-f8bea7d5-6779-4a70-8814-e0835e9ba9e1" satisfied condition "success or failure"
Aug 26 23:28:30.570: INFO: Trying to get logs from node jerma-worker pod downward-api-f8bea7d5-6779-4a70-8814-e0835e9ba9e1 container dapi-container: 
STEP: delete the pod
Aug 26 23:28:30.600: INFO: Waiting for pod downward-api-f8bea7d5-6779-4a70-8814-e0835e9ba9e1 to disappear
Aug 26 23:28:30.622: INFO: Pod downward-api-f8bea7d5-6779-4a70-8814-e0835e9ba9e1 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:28:30.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9536" for this suite.

• [SLOW TEST:6.344 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2891,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:28:30.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-qk9v
STEP: Creating a pod to test atomic-volume-subpath
Aug 26 23:28:30.882: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qk9v" in namespace "subpath-5747" to be "success or failure"
Aug 26 23:28:30.887: INFO: Pod "pod-subpath-test-configmap-qk9v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.217357ms
Aug 26 23:28:33.069: INFO: Pod "pod-subpath-test-configmap-qk9v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186644295s
Aug 26 23:28:35.073: INFO: Pod "pod-subpath-test-configmap-qk9v": Phase="Running", Reason="", readiness=true. Elapsed: 4.190683575s
Aug 26 23:28:37.077: INFO: Pod "pod-subpath-test-configmap-qk9v": Phase="Running", Reason="", readiness=true. Elapsed: 6.194626303s
Aug 26 23:28:39.081: INFO: Pod "pod-subpath-test-configmap-qk9v": Phase="Running", Reason="", readiness=true. Elapsed: 8.198154228s
Aug 26 23:28:41.085: INFO: Pod "pod-subpath-test-configmap-qk9v": Phase="Running", Reason="", readiness=true. Elapsed: 10.202147374s
Aug 26 23:28:43.088: INFO: Pod "pod-subpath-test-configmap-qk9v": Phase="Running", Reason="", readiness=true. Elapsed: 12.205325568s
Aug 26 23:28:45.091: INFO: Pod "pod-subpath-test-configmap-qk9v": Phase="Running", Reason="", readiness=true. Elapsed: 14.208812627s
Aug 26 23:28:47.095: INFO: Pod "pod-subpath-test-configmap-qk9v": Phase="Running", Reason="", readiness=true. Elapsed: 16.212088599s
Aug 26 23:28:49.098: INFO: Pod "pod-subpath-test-configmap-qk9v": Phase="Running", Reason="", readiness=true. Elapsed: 18.215042358s
Aug 26 23:28:51.159: INFO: Pod "pod-subpath-test-configmap-qk9v": Phase="Running", Reason="", readiness=true. Elapsed: 20.276424342s
Aug 26 23:28:53.165: INFO: Pod "pod-subpath-test-configmap-qk9v": Phase="Running", Reason="", readiness=true. Elapsed: 22.282101625s
Aug 26 23:28:55.170: INFO: Pod "pod-subpath-test-configmap-qk9v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.287906422s
STEP: Saw pod success
Aug 26 23:28:55.170: INFO: Pod "pod-subpath-test-configmap-qk9v" satisfied condition "success or failure"
Aug 26 23:28:55.173: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-qk9v container test-container-subpath-configmap-qk9v: 
STEP: delete the pod
Aug 26 23:28:55.192: INFO: Waiting for pod pod-subpath-test-configmap-qk9v to disappear
Aug 26 23:28:55.293: INFO: Pod pod-subpath-test-configmap-qk9v no longer exists
STEP: Deleting pod pod-subpath-test-configmap-qk9v
Aug 26 23:28:55.293: INFO: Deleting pod "pod-subpath-test-configmap-qk9v" in namespace "subpath-5747"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:28:55.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5747" for this suite.

• [SLOW TEST:24.824 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":183,"skipped":2901,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:28:55.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run rc
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1526
[It] should create an rc from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 26 23:28:55.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-1060'
Aug 26 23:28:58.923: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 26 23:28:58.924: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Aug 26 23:28:59.030: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-mk6w8]
Aug 26 23:28:59.030: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-mk6w8" in namespace "kubectl-1060" to be "running and ready"
Aug 26 23:28:59.101: INFO: Pod "e2e-test-httpd-rc-mk6w8": Phase="Pending", Reason="", readiness=false. Elapsed: 71.203003ms
Aug 26 23:29:01.291: INFO: Pod "e2e-test-httpd-rc-mk6w8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.26140421s
Aug 26 23:29:03.295: INFO: Pod "e2e-test-httpd-rc-mk6w8": Phase="Running", Reason="", readiness=true. Elapsed: 4.265117948s
Aug 26 23:29:03.295: INFO: Pod "e2e-test-httpd-rc-mk6w8" satisfied condition "running and ready"
Aug 26 23:29:03.295: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-mk6w8]
Aug 26 23:29:03.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-1060'
Aug 26 23:29:03.461: INFO: stderr: ""
Aug 26 23:29:03.461: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.30. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.30. Set the 'ServerName' directive globally to suppress this message\n[Wed Aug 26 23:29:02.163494 2020] [mpm_event:notice] [pid 1:tid 139717100788584] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Wed Aug 26 23:29:02.163554 2020] [core:notice] [pid 1:tid 139717100788584] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1531
Aug 26 23:29:03.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-1060'
Aug 26 23:29:03.571: INFO: stderr: ""
Aug 26 23:29:03.571: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:29:03.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1060" for this suite.

• [SLOW TEST:8.122 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
    should create an rc from an image [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Deprecated] [Conformance]","total":278,"completed":184,"skipped":2920,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:29:03.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a working application  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Aug 26 23:29:04.089: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Aug 26 23:29:04.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-279'
Aug 26 23:29:04.531: INFO: stderr: ""
Aug 26 23:29:04.531: INFO: stdout: "service/agnhost-slave created\n"
Aug 26 23:29:04.531: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Aug 26 23:29:04.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-279'
Aug 26 23:29:04.889: INFO: stderr: ""
Aug 26 23:29:04.889: INFO: stdout: "service/agnhost-master created\n"
Aug 26 23:29:04.889: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 26 23:29:04.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-279'
Aug 26 23:29:05.403: INFO: stderr: ""
Aug 26 23:29:05.403: INFO: stdout: "service/frontend created\n"
Aug 26 23:29:05.403: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Aug 26 23:29:05.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-279'
Aug 26 23:29:05.803: INFO: stderr: ""
Aug 26 23:29:05.803: INFO: stdout: "deployment.apps/frontend created\n"
Aug 26 23:29:05.803: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 26 23:29:05.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-279'
Aug 26 23:29:06.137: INFO: stderr: ""
Aug 26 23:29:06.137: INFO: stdout: "deployment.apps/agnhost-master created\n"
Aug 26 23:29:06.137: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 26 23:29:06.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-279'
Aug 26 23:29:06.442: INFO: stderr: ""
Aug 26 23:29:06.442: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Aug 26 23:29:06.442: INFO: Waiting for all frontend pods to be Running.
Aug 26 23:29:16.492: INFO: Waiting for frontend to serve content.
Aug 26 23:29:16.501: INFO: Trying to add a new entry to the guestbook.
Aug 26 23:29:16.513: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug 26 23:29:16.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-279'
Aug 26 23:29:16.700: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 23:29:16.700: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug 26 23:29:16.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-279'
Aug 26 23:29:16.874: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 23:29:16.874: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 26 23:29:16.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-279'
Aug 26 23:29:17.005: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 23:29:17.005: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 26 23:29:17.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-279'
Aug 26 23:29:17.152: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 23:29:17.152: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 26 23:29:17.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-279'
Aug 26 23:29:17.287: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 23:29:17.287: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 26 23:29:17.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-279'
Aug 26 23:29:17.703: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 23:29:17.703: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:29:17.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-279" for this suite.

• [SLOW TEST:14.146 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:381
    should create and stop a working application  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":185,"skipped":2936,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:29:17.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-4825
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 26 23:29:18.605: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 26 23:29:43.343: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.108:8080/dial?request=hostname&protocol=http&host=10.244.2.33&port=8080&tries=1'] Namespace:pod-network-test-4825 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 23:29:43.343: INFO: >>> kubeConfig: /root/.kube/config
I0826 23:29:43.379357       6 log.go:172] (0xc00416c210) (0xc0021a7040) Create stream
I0826 23:29:43.379385       6 log.go:172] (0xc00416c210) (0xc0021a7040) Stream added, broadcasting: 1
I0826 23:29:43.381377       6 log.go:172] (0xc00416c210) Reply frame received for 1
I0826 23:29:43.381439       6 log.go:172] (0xc00416c210) (0xc002562c80) Create stream
I0826 23:29:43.381464       6 log.go:172] (0xc00416c210) (0xc002562c80) Stream added, broadcasting: 3
I0826 23:29:43.382628       6 log.go:172] (0xc00416c210) Reply frame received for 3
I0826 23:29:43.382672       6 log.go:172] (0xc00416c210) (0xc0021a70e0) Create stream
I0826 23:29:43.382689       6 log.go:172] (0xc00416c210) (0xc0021a70e0) Stream added, broadcasting: 5
I0826 23:29:43.383658       6 log.go:172] (0xc00416c210) Reply frame received for 5
I0826 23:29:43.462901       6 log.go:172] (0xc00416c210) Data frame received for 3
I0826 23:29:43.462925       6 log.go:172] (0xc002562c80) (3) Data frame handling
I0826 23:29:43.462940       6 log.go:172] (0xc002562c80) (3) Data frame sent
I0826 23:29:43.463529       6 log.go:172] (0xc00416c210) Data frame received for 5
I0826 23:29:43.463541       6 log.go:172] (0xc0021a70e0) (5) Data frame handling
I0826 23:29:43.463566       6 log.go:172] (0xc00416c210) Data frame received for 3
I0826 23:29:43.463587       6 log.go:172] (0xc002562c80) (3) Data frame handling
I0826 23:29:43.465079       6 log.go:172] (0xc00416c210) Data frame received for 1
I0826 23:29:43.465098       6 log.go:172] (0xc0021a7040) (1) Data frame handling
I0826 23:29:43.465106       6 log.go:172] (0xc0021a7040) (1) Data frame sent
I0826 23:29:43.465125       6 log.go:172] (0xc00416c210) (0xc0021a7040) Stream removed, broadcasting: 1
I0826 23:29:43.465136       6 log.go:172] (0xc00416c210) Go away received
I0826 23:29:43.465210       6 log.go:172] (0xc00416c210) (0xc0021a7040) Stream removed, broadcasting: 1
I0826 23:29:43.465235       6 log.go:172] (0xc00416c210) (0xc002562c80) Stream removed, broadcasting: 3
I0826 23:29:43.465244       6 log.go:172] (0xc00416c210) (0xc0021a70e0) Stream removed, broadcasting: 5
Aug 26 23:29:43.465: INFO: Waiting for responses: map[]
Aug 26 23:29:43.467: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.108:8080/dial?request=hostname&protocol=http&host=10.244.1.107&port=8080&tries=1'] Namespace:pod-network-test-4825 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 23:29:43.467: INFO: >>> kubeConfig: /root/.kube/config
I0826 23:29:43.496634       6 log.go:172] (0xc00167e9a0) (0xc0014cac80) Create stream
I0826 23:29:43.496653       6 log.go:172] (0xc00167e9a0) (0xc0014cac80) Stream added, broadcasting: 1
I0826 23:29:43.506973       6 log.go:172] (0xc00167e9a0) Reply frame received for 1
I0826 23:29:43.507035       6 log.go:172] (0xc00167e9a0) (0xc001e463c0) Create stream
I0826 23:29:43.507057       6 log.go:172] (0xc00167e9a0) (0xc001e463c0) Stream added, broadcasting: 3
I0826 23:29:43.510351       6 log.go:172] (0xc00167e9a0) Reply frame received for 3
I0826 23:29:43.510424       6 log.go:172] (0xc00167e9a0) (0xc0021a7220) Create stream
I0826 23:29:43.510458       6 log.go:172] (0xc00167e9a0) (0xc0021a7220) Stream added, broadcasting: 5
I0826 23:29:43.512131       6 log.go:172] (0xc00167e9a0) Reply frame received for 5
I0826 23:29:43.594225       6 log.go:172] (0xc00167e9a0) Data frame received for 3
I0826 23:29:43.594253       6 log.go:172] (0xc001e463c0) (3) Data frame handling
I0826 23:29:43.594271       6 log.go:172] (0xc001e463c0) (3) Data frame sent
I0826 23:29:43.594464       6 log.go:172] (0xc00167e9a0) Data frame received for 5
I0826 23:29:43.594490       6 log.go:172] (0xc0021a7220) (5) Data frame handling
I0826 23:29:43.594511       6 log.go:172] (0xc00167e9a0) Data frame received for 3
I0826 23:29:43.594521       6 log.go:172] (0xc001e463c0) (3) Data frame handling
I0826 23:29:43.595808       6 log.go:172] (0xc00167e9a0) Data frame received for 1
I0826 23:29:43.595840       6 log.go:172] (0xc0014cac80) (1) Data frame handling
I0826 23:29:43.595860       6 log.go:172] (0xc0014cac80) (1) Data frame sent
I0826 23:29:43.595874       6 log.go:172] (0xc00167e9a0) (0xc0014cac80) Stream removed, broadcasting: 1
I0826 23:29:43.595961       6 log.go:172] (0xc00167e9a0) (0xc0014cac80) Stream removed, broadcasting: 1
I0826 23:29:43.595974       6 log.go:172] (0xc00167e9a0) (0xc001e463c0) Stream removed, broadcasting: 3
I0826 23:29:43.595992       6 log.go:172] (0xc00167e9a0) (0xc0021a7220) Stream removed, broadcasting: 5
Aug 26 23:29:43.596: INFO: Waiting for responses: map[]
I0826 23:29:43.596061       6 log.go:172] (0xc00167e9a0) Go away received
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:29:43.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4825" for this suite.

• [SLOW TEST:25.881 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":2985,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:29:43.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0826 23:29:53.785748       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 26 23:29:53.785: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:29:53.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9161" for this suite.

• [SLOW TEST:10.189 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":187,"skipped":3028,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:29:53.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 23:29:53.893: INFO: Waiting up to 5m0s for pod "downwardapi-volume-025742e1-0a1c-463e-8d83-517712a43a96" in namespace "projected-5900" to be "success or failure"
Aug 26 23:29:53.896: INFO: Pod "downwardapi-volume-025742e1-0a1c-463e-8d83-517712a43a96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.928778ms
Aug 26 23:29:56.016: INFO: Pod "downwardapi-volume-025742e1-0a1c-463e-8d83-517712a43a96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12276331s
Aug 26 23:29:58.020: INFO: Pod "downwardapi-volume-025742e1-0a1c-463e-8d83-517712a43a96": Phase="Running", Reason="", readiness=true. Elapsed: 4.126752012s
Aug 26 23:30:00.025: INFO: Pod "downwardapi-volume-025742e1-0a1c-463e-8d83-517712a43a96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.131138918s
STEP: Saw pod success
Aug 26 23:30:00.025: INFO: Pod "downwardapi-volume-025742e1-0a1c-463e-8d83-517712a43a96" satisfied condition "success or failure"
Aug 26 23:30:00.027: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-025742e1-0a1c-463e-8d83-517712a43a96 container client-container: 
STEP: delete the pod
Aug 26 23:30:00.053: INFO: Waiting for pod downwardapi-volume-025742e1-0a1c-463e-8d83-517712a43a96 to disappear
Aug 26 23:30:00.057: INFO: Pod downwardapi-volume-025742e1-0a1c-463e-8d83-517712a43a96 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:30:00.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5900" for this suite.

• [SLOW TEST:6.270 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":3052,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:30:00.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-6785
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6785 to expose endpoints map[]
Aug 26 23:30:00.241: INFO: Get endpoints failed (3.324069ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Aug 26 23:30:01.244: INFO: successfully validated that service endpoint-test2 in namespace services-6785 exposes endpoints map[] (1.006843668s elapsed)
STEP: Creating pod pod1 in namespace services-6785
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6785 to expose endpoints map[pod1:[80]]
Aug 26 23:30:05.539: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.288363365s elapsed, will retry)
Aug 26 23:30:07.733: INFO: successfully validated that service endpoint-test2 in namespace services-6785 exposes endpoints map[pod1:[80]] (6.482695055s elapsed)
STEP: Creating pod pod2 in namespace services-6785
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6785 to expose endpoints map[pod1:[80] pod2:[80]]
Aug 26 23:30:11.991: INFO: Unexpected endpoints: found map[35243b5c-f629-49de-8a1f-4127f1847811:[80]], expected map[pod1:[80] pod2:[80]] (4.253230748s elapsed, will retry)
Aug 26 23:30:13.001: INFO: successfully validated that service endpoint-test2 in namespace services-6785 exposes endpoints map[pod1:[80] pod2:[80]] (5.262957544s elapsed)
STEP: Deleting pod pod1 in namespace services-6785
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6785 to expose endpoints map[pod2:[80]]
Aug 26 23:30:14.083: INFO: successfully validated that service endpoint-test2 in namespace services-6785 exposes endpoints map[pod2:[80]] (1.077161329s elapsed)
STEP: Deleting pod pod2 in namespace services-6785
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6785 to expose endpoints map[]
Aug 26 23:30:15.099: INFO: successfully validated that service endpoint-test2 in namespace services-6785 exposes endpoints map[] (1.01300773s elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:30:15.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6785" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:15.100 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":189,"skipped":3060,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:30:15.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0826 23:30:16.544894       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 26 23:30:16.544: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:30:16.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7612" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":190,"skipped":3087,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:30:16.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-33f60df3-7fce-4a5f-acf5-6c9291ba6b10
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:30:22.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8406" for this suite.

• [SLOW TEST:6.238 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3099,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:30:22.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Aug 26 23:30:22.886: INFO: namespace kubectl-9341
Aug 26 23:30:22.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9341'
Aug 26 23:30:23.156: INFO: stderr: ""
Aug 26 23:30:23.156: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 26 23:30:24.161: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 23:30:24.161: INFO: Found 0 / 1
Aug 26 23:30:25.499: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 23:30:25.499: INFO: Found 0 / 1
Aug 26 23:30:26.160: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 23:30:26.160: INFO: Found 0 / 1
Aug 26 23:30:27.166: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 23:30:27.166: INFO: Found 1 / 1
Aug 26 23:30:27.166: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 26 23:30:27.168: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 23:30:27.168: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 26 23:30:27.168: INFO: wait on agnhost-master startup in kubectl-9341 
Aug 26 23:30:27.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-qgdj2 agnhost-master --namespace=kubectl-9341'
Aug 26 23:30:27.334: INFO: stderr: ""
Aug 26 23:30:27.334: INFO: stdout: "Paused\n"
STEP: exposing RC
Aug 26 23:30:27.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9341'
Aug 26 23:30:27.768: INFO: stderr: ""
Aug 26 23:30:27.768: INFO: stdout: "service/rm2 exposed\n"
Aug 26 23:30:27.837: INFO: Service rm2 in namespace kubectl-9341 found.
STEP: exposing service
Aug 26 23:30:29.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9341'
Aug 26 23:30:30.024: INFO: stderr: ""
Aug 26 23:30:30.024: INFO: stdout: "service/rm3 exposed\n"
Aug 26 23:30:30.061: INFO: Service rm3 in namespace kubectl-9341 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:30:32.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9341" for this suite.

• [SLOW TEST:9.287 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1189
    should create services for rc  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":278,"completed":192,"skipped":3138,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:30:32.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-33/secret-test-b778b9c9-84a6-470f-ac0d-3faf3427add4
STEP: Creating a pod to test consume secrets
Aug 26 23:30:32.175: INFO: Waiting up to 5m0s for pod "pod-configmaps-d803c47e-a277-4b2e-8dbb-a77bd084dd61" in namespace "secrets-33" to be "success or failure"
Aug 26 23:30:32.179: INFO: Pod "pod-configmaps-d803c47e-a277-4b2e-8dbb-a77bd084dd61": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010039ms
Aug 26 23:30:34.182: INFO: Pod "pod-configmaps-d803c47e-a277-4b2e-8dbb-a77bd084dd61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007335611s
Aug 26 23:30:36.424: INFO: Pod "pod-configmaps-d803c47e-a277-4b2e-8dbb-a77bd084dd61": Phase="Pending", Reason="", readiness=false. Elapsed: 4.249537624s
Aug 26 23:30:38.428: INFO: Pod "pod-configmaps-d803c47e-a277-4b2e-8dbb-a77bd084dd61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.25378971s
STEP: Saw pod success
Aug 26 23:30:38.428: INFO: Pod "pod-configmaps-d803c47e-a277-4b2e-8dbb-a77bd084dd61" satisfied condition "success or failure"
Aug 26 23:30:38.431: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-d803c47e-a277-4b2e-8dbb-a77bd084dd61 container env-test: 
STEP: delete the pod
Aug 26 23:30:38.615: INFO: Waiting for pod pod-configmaps-d803c47e-a277-4b2e-8dbb-a77bd084dd61 to disappear
Aug 26 23:30:38.618: INFO: Pod pod-configmaps-d803c47e-a277-4b2e-8dbb-a77bd084dd61 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:30:38.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-33" for this suite.

• [SLOW TEST:6.547 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3164,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:30:38.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:30:39.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5001" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":194,"skipped":3165,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:30:39.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:30:43.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6852" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3189,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:30:43.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Aug 26 23:30:43.924: INFO: created pod pod-service-account-defaultsa
Aug 26 23:30:43.924: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug 26 23:30:44.028: INFO: created pod pod-service-account-mountsa
Aug 26 23:30:44.028: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug 26 23:30:44.036: INFO: created pod pod-service-account-nomountsa
Aug 26 23:30:44.036: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug 26 23:30:44.089: INFO: created pod pod-service-account-defaultsa-mountspec
Aug 26 23:30:44.089: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug 26 23:30:44.191: INFO: created pod pod-service-account-mountsa-mountspec
Aug 26 23:30:44.191: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug 26 23:30:44.211: INFO: created pod pod-service-account-nomountsa-mountspec
Aug 26 23:30:44.211: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug 26 23:30:44.278: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug 26 23:30:44.278: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug 26 23:30:44.383: INFO: created pod pod-service-account-mountsa-nomountspec
Aug 26 23:30:44.383: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug 26 23:30:44.429: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug 26 23:30:44.429: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:30:44.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3209" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":196,"skipped":3236,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:30:44.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 26 23:31:00.203: INFO: Successfully updated pod "labelsupdateb55e0e22-f4db-4bd7-b6dc-7b3ca48b80be"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:31:02.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4616" for this suite.

• [SLOW TEST:17.608 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3265,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:31:02.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 26 23:31:02.374: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 26 23:31:02.393: INFO: Waiting for terminating namespaces to be deleted...
Aug 26 23:31:02.395: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 26 23:31:02.399: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 23:31:02.399: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 26 23:31:02.399: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 23:31:02.399: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 23:31:02.399: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 26 23:31:02.399: INFO: 	Container app ready: true, restart count 0
Aug 26 23:31:02.399: INFO: busybox-readonly-fse54c13ed-a424-431e-b358-45f2f87826d5 from kubelet-test-6852 started at 2020-08-26 23:30:39 +0000 UTC (1 container status recorded)
Aug 26 23:31:02.399: INFO: 	Container busybox-readonly-fse54c13ed-a424-431e-b358-45f2f87826d5 ready: true, restart count 0
Aug 26 23:31:02.399: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 26 23:31:02.403: INFO: test-recreate-deployment-5f94c574ff-k4dkm from deployment-5601 started at 2020-08-23 04:50:56 +0000 UTC (1 container status recorded)
Aug 26 23:31:02.403: INFO: 	Container httpd ready: true, restart count 0
Aug 26 23:31:02.403: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 23:31:02.403: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 23:31:02.403: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 26 23:31:02.403: INFO: 	Container app ready: true, restart count 0
Aug 26 23:31:02.403: INFO: labelsupdateb55e0e22-f4db-4bd7-b6dc-7b3ca48b80be from projected-4616 started at 2020-08-26 23:30:45 +0000 UTC (1 container status recorded)
Aug 26 23:31:02.403: INFO: 	Container client-container ready: true, restart count 0
Aug 26 23:31:02.403: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 23:31:02.403: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-01cae9cd-0eb4-46ef-845c-c428a8e8fa79 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here), expecting it to be scheduled
STEP: Trying to create another pod (pod5) with the same hostPort 54322 but hostIP 127.0.0.1, on the node where pod4 resides, expecting it not to be scheduled
STEP: removing the label kubernetes.io/e2e-01cae9cd-0eb4-46ef-845c-c428a8e8fa79 off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-01cae9cd-0eb4-46ef-845c-c428a8e8fa79
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:36:16.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6204" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:314.267 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":198,"skipped":3281,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:36:16.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:36:16.692: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Aug 26 23:36:16.699: INFO: Number of nodes with available pods: 0
Aug 26 23:36:16.699: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Aug 26 23:36:16.747: INFO: Number of nodes with available pods: 0
Aug 26 23:36:16.747: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:36:17.750: INFO: Number of nodes with available pods: 0
Aug 26 23:36:17.750: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:36:18.750: INFO: Number of nodes with available pods: 0
Aug 26 23:36:18.751: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:36:19.751: INFO: Number of nodes with available pods: 0
Aug 26 23:36:19.751: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:36:20.750: INFO: Number of nodes with available pods: 1
Aug 26 23:36:20.750: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Aug 26 23:36:20.840: INFO: Number of nodes with available pods: 1
Aug 26 23:36:20.840: INFO: Number of running nodes: 0, number of available pods: 1
Aug 26 23:36:21.861: INFO: Number of nodes with available pods: 0
Aug 26 23:36:21.861: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Aug 26 23:36:21.888: INFO: Number of nodes with available pods: 0
Aug 26 23:36:21.888: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:36:22.892: INFO: Number of nodes with available pods: 0
Aug 26 23:36:22.892: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:36:23.892: INFO: Number of nodes with available pods: 0
Aug 26 23:36:23.892: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:36:24.892: INFO: Number of nodes with available pods: 0
Aug 26 23:36:24.892: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:36:25.891: INFO: Number of nodes with available pods: 0
Aug 26 23:36:25.891: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:36:26.892: INFO: Number of nodes with available pods: 0
Aug 26 23:36:26.892: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:36:27.894: INFO: Number of nodes with available pods: 0
Aug 26 23:36:27.894: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:36:28.892: INFO: Number of nodes with available pods: 0
Aug 26 23:36:28.892: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:36:29.892: INFO: Number of nodes with available pods: 0
Aug 26 23:36:29.892: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:36:30.892: INFO: Number of nodes with available pods: 0
Aug 26 23:36:30.892: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:36:31.892: INFO: Number of nodes with available pods: 0
Aug 26 23:36:31.892: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:36:32.892: INFO: Number of nodes with available pods: 0
Aug 26 23:36:32.892: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:36:33.948: INFO: Number of nodes with available pods: 0
Aug 26 23:36:33.948: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:36:34.893: INFO: Number of nodes with available pods: 0
Aug 26 23:36:34.893: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:36:36.266: INFO: Number of nodes with available pods: 1
Aug 26 23:36:36.266: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7220, will wait for the garbage collector to delete the pods
Aug 26 23:36:36.686: INFO: Deleting DaemonSet.extensions daemon-set took: 136.182264ms
Aug 26 23:36:36.986: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.330345ms
Aug 26 23:36:51.820: INFO: Number of nodes with available pods: 0
Aug 26 23:36:51.820: INFO: Number of running nodes: 0, number of available pods: 0
Aug 26 23:36:51.823: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7220/daemonsets","resourceVersion":"4045469"},"items":null}

Aug 26 23:36:51.825: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7220/pods","resourceVersion":"4045469"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:36:51.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7220" for this suite.

• [SLOW TEST:35.431 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":199,"skipped":3299,"failed":0}
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:36:51.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 26 23:37:02.219: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 26 23:37:02.223: INFO: Pod pod-with-poststart-http-hook still exists
Aug 26 23:37:04.223: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 26 23:37:04.227: INFO: Pod pod-with-poststart-http-hook still exists
Aug 26 23:37:06.223: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 26 23:37:06.227: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:37:06.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5561" for this suite.

• [SLOW TEST:14.274 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3299,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:37:06.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug 26 23:37:14.971: INFO: 0 pods remaining
Aug 26 23:37:14.971: INFO: 0 pods have nil DeletionTimestamp
Aug 26 23:37:14.971: INFO: 
Aug 26 23:37:16.784: INFO: 0 pods remaining
Aug 26 23:37:16.784: INFO: 0 pods have nil DeletionTimestamp
Aug 26 23:37:16.784: INFO: 
Aug 26 23:37:17.159: INFO: 0 pods remaining
Aug 26 23:37:17.159: INFO: 0 pods have nil DeletionTimestamp
Aug 26 23:37:17.159: INFO: 
STEP: Gathering metrics
W0826 23:37:18.998469       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 26 23:37:18.998: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:37:18.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7082" for this suite.

• [SLOW TEST:13.325 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":201,"skipped":3311,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:37:19.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:37:21.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8949" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":202,"skipped":3322,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:37:22.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 26 23:37:29.884: INFO: Successfully updated pod "labelsupdatefacaae30-659c-4d5d-9652-447419ef7d15"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:37:32.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8527" for this suite.

• [SLOW TEST:9.830 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3349,"failed":0}
SSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:37:32.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:37:32.329: INFO: Pod name rollover-pod: Found 0 pods out of 1
Aug 26 23:37:37.338: INFO: Pod name rollover-pod: Found 1 pod out of 1
STEP: ensuring each pod is running
Aug 26 23:37:37.338: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Aug 26 23:37:39.341: INFO: Creating deployment "test-rollover-deployment"
Aug 26 23:37:39.350: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Aug 26 23:37:41.356: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Aug 26 23:37:41.361: INFO: Ensure that both replica sets have 1 created replica
Aug 26 23:37:41.366: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Aug 26 23:37:41.372: INFO: Updating deployment test-rollover-deployment
Aug 26 23:37:41.372: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Aug 26 23:37:43.669: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Aug 26 23:37:43.674: INFO: Make sure deployment "test-rollover-deployment" is complete
Aug 26 23:37:43.678: INFO: all replica sets need to contain the pod-template-hash label
Aug 26 23:37:43.678: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081859, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081859, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081862, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081859, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 23:37:46.043: INFO: all replica sets need to contain the pod-template-hash label
Aug 26 23:37:46.043: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081859, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081859, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081862, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081859, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 23:37:47.724: INFO: all replica sets need to contain the pod-template-hash label
Aug 26 23:37:47.724: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081859, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081859, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081862, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081859, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 23:37:49.701: INFO: all replica sets need to contain the pod-template-hash label
Aug 26 23:37:49.701: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081859, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081859, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081869, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081859, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 23:37:51.720: INFO: all replica sets need to contain the pod-template-hash label
Aug 26 23:37:51.720: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081859, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081859, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081869, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081859, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 23:37:53.848: INFO: all replica sets need to contain the pod-template-hash label
Aug 26 23:37:53.848: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081859, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081859, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081869, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081859, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 23:37:56.025: INFO: all replica sets need to contain the pod-template-hash label
Aug 26 23:37:56.025: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081859, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081859, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081869, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081859, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 23:37:57.753: INFO: all replica sets need to contain the pod-template-hash label
Aug 26 23:37:57.753: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081859, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081859, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081869, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734081859, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 23:37:59.773: INFO: 
Aug 26 23:37:59.773: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 26 23:37:59.780: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-7680 /apis/apps/v1/namespaces/deployment-7680/deployments/test-rollover-deployment 574709ab-f31b-4e23-838a-11e043bdbe88 4045978 2 2020-08-26 23:37:39 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002dadb88  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-26 23:37:39 +0000 UTC,LastTransitionTime:2020-08-26 23:37:39 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-08-26 23:37:59 +0000 UTC,LastTransitionTime:2020-08-26 23:37:39 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 26 23:37:59.782: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-7680 /apis/apps/v1/namespaces/deployment-7680/replicasets/test-rollover-deployment-574d6dfbff 7f562b71-628c-4f39-9f8d-3e4e96deb395 4045967 2 2020-08-26 23:37:41 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 574709ab-f31b-4e23-838a-11e043bdbe88 0xc0034ed9d7 0xc0034ed9d8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0034edaa8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 26 23:37:59.782: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Aug 26 23:37:59.782: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-7680 /apis/apps/v1/namespaces/deployment-7680/replicasets/test-rollover-controller aa799adc-dc96-4528-a8f5-b0d7cded08d9 4045976 2 2020-08-26 23:37:32 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 574709ab-f31b-4e23-838a-11e043bdbe88 0xc0034ed8ef 0xc0034ed900}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0034ed968  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 26 23:37:59.782: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-7680 /apis/apps/v1/namespaces/deployment-7680/replicasets/test-rollover-deployment-f6c94f66c 47177eaf-d0fb-461b-9250-a4436e4d2030 4045895 2 2020-08-26 23:37:39 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 574709ab-f31b-4e23-838a-11e043bdbe88 0xc0034edb10 0xc0034edb11}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0034edb88  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 26 23:37:59.785: INFO: Pod "test-rollover-deployment-574d6dfbff-sxr6t" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-sxr6t test-rollover-deployment-574d6dfbff- deployment-7680 /api/v1/namespaces/deployment-7680/pods/test-rollover-deployment-574d6dfbff-sxr6t 0b55f7e2-1679-4e52-8bbd-1aff7fcd0f9a 4045935 0 2020-08-26 23:37:42 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 7f562b71-628c-4f39-9f8d-3e4e96deb395 0xc003522257 0xc003522258}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rp4gs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rp4gs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rp4gs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:37:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:37:49 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:37:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:37:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.55,StartTime:2020-08-26 23:37:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 23:37:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://a1150262fd89531f1fe500eecc9750e9b03a01aedc4df8d427190d78e57df489,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.55,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:37:59.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7680" for this suite.

• [SLOW TEST:27.649 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":204,"skipped":3352,"failed":0}
S
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:37:59.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3683.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3683.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3683.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3683.svc.cluster.local; sleep 1; done
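The externalName service being probed can be reproduced with a manifest along these lines; the name and namespace come from this log, and the initial target foo.example.com is inferred from the CNAME answers recorded below:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-3683
spec:
  type: ExternalName
  externalName: foo.example.com
EOF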

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 23:38:10.318: INFO: DNS probes using dns-test-17c503c8-ffdf-4fdc-b0c0-13a8bf7da08b succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
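That change amounts to a strategic-merge patch of spec.externalName; a sketch with the names from this log:

kubectl patch service dns-test-service-3 -n dns-3683 \
  -p '{"spec":{"externalName":"bar.example.com"}}'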
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3683.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3683.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3683.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3683.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 23:38:18.450: INFO: File wheezy_udp@dns-test-service-3.dns-3683.svc.cluster.local from pod dns-3683/dns-test-37ccd69b-1dd4-422a-9bb8-62f20372592a contains 'foo.example.com.' instead of 'bar.example.com.'
Aug 26 23:38:18.453: INFO: File jessie_udp@dns-test-service-3.dns-3683.svc.cluster.local from pod dns-3683/dns-test-37ccd69b-1dd4-422a-9bb8-62f20372592a contains 'foo.example.com.' instead of 'bar.example.com.'
Aug 26 23:38:18.453: INFO: Lookups using dns-3683/dns-test-37ccd69b-1dd4-422a-9bb8-62f20372592a failed for: [wheezy_udp@dns-test-service-3.dns-3683.svc.cluster.local jessie_udp@dns-test-service-3.dns-3683.svc.cluster.local]

Aug 26 23:38:23.458: INFO: File wheezy_udp@dns-test-service-3.dns-3683.svc.cluster.local from pod dns-3683/dns-test-37ccd69b-1dd4-422a-9bb8-62f20372592a contains 'foo.example.com.' instead of 'bar.example.com.'
Aug 26 23:38:23.462: INFO: File jessie_udp@dns-test-service-3.dns-3683.svc.cluster.local from pod dns-3683/dns-test-37ccd69b-1dd4-422a-9bb8-62f20372592a contains 'foo.example.com.' instead of 'bar.example.com.'
Aug 26 23:38:23.462: INFO: Lookups using dns-3683/dns-test-37ccd69b-1dd4-422a-9bb8-62f20372592a failed for: [wheezy_udp@dns-test-service-3.dns-3683.svc.cluster.local jessie_udp@dns-test-service-3.dns-3683.svc.cluster.local]

Aug 26 23:38:28.458: INFO: File wheezy_udp@dns-test-service-3.dns-3683.svc.cluster.local from pod dns-3683/dns-test-37ccd69b-1dd4-422a-9bb8-62f20372592a contains 'foo.example.com.' instead of 'bar.example.com.'
Aug 26 23:38:28.461: INFO: File jessie_udp@dns-test-service-3.dns-3683.svc.cluster.local from pod dns-3683/dns-test-37ccd69b-1dd4-422a-9bb8-62f20372592a contains 'foo.example.com.' instead of 'bar.example.com.'
Aug 26 23:38:28.461: INFO: Lookups using dns-3683/dns-test-37ccd69b-1dd4-422a-9bb8-62f20372592a failed for: [wheezy_udp@dns-test-service-3.dns-3683.svc.cluster.local jessie_udp@dns-test-service-3.dns-3683.svc.cluster.local]

Aug 26 23:38:33.530: INFO: File wheezy_udp@dns-test-service-3.dns-3683.svc.cluster.local from pod dns-3683/dns-test-37ccd69b-1dd4-422a-9bb8-62f20372592a contains 'foo.example.com.' instead of 'bar.example.com.'
Aug 26 23:38:33.534: INFO: File jessie_udp@dns-test-service-3.dns-3683.svc.cluster.local from pod dns-3683/dns-test-37ccd69b-1dd4-422a-9bb8-62f20372592a contains '' instead of 'bar.example.com.'
Aug 26 23:38:33.534: INFO: Lookups using dns-3683/dns-test-37ccd69b-1dd4-422a-9bb8-62f20372592a failed for: [wheezy_udp@dns-test-service-3.dns-3683.svc.cluster.local jessie_udp@dns-test-service-3.dns-3683.svc.cluster.local]

Aug 26 23:38:38.457: INFO: File wheezy_udp@dns-test-service-3.dns-3683.svc.cluster.local from pod dns-3683/dns-test-37ccd69b-1dd4-422a-9bb8-62f20372592a contains 'foo.example.com.' instead of 'bar.example.com.'
Aug 26 23:38:38.460: INFO: File jessie_udp@dns-test-service-3.dns-3683.svc.cluster.local from pod dns-3683/dns-test-37ccd69b-1dd4-422a-9bb8-62f20372592a contains 'foo.example.com.' instead of 'bar.example.com.'
Aug 26 23:38:38.460: INFO: Lookups using dns-3683/dns-test-37ccd69b-1dd4-422a-9bb8-62f20372592a failed for: [wheezy_udp@dns-test-service-3.dns-3683.svc.cluster.local jessie_udp@dns-test-service-3.dns-3683.svc.cluster.local]

Aug 26 23:38:43.462: INFO: DNS probes using dns-test-37ccd69b-1dd4-422a-9bb8-62f20372592a succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
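Switching from ExternalName to ClusterIP also means the service must carry at least one port, since only ExternalName services may omit spec.ports. A sketch of the equivalent patch (port 80 assumed; externalName is nulled out so the field is dropped):

kubectl patch service dns-test-service-3 -n dns-3683 \
  -p '{"spec":{"type":"ClusterIP","externalName":null,"ports":[{"name":"http","port":80,"protocol":"TCP"}]}}'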
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3683.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3683.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3683.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3683.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 23:38:54.913: INFO: DNS probes using dns-test-e9c86557-536f-429a-a85c-6f91996732fc succeeded
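A manual spot check mirroring the probe loop above would look like the following, where <probe-pod> is a placeholder for any pod in the namespace with dig installed:

kubectl exec -n dns-3683 <probe-pod> -- \
  dig +short dns-test-service-3.dns-3683.svc.cluster.local A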

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:38:55.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3683" for this suite.

• [SLOW TEST:55.241 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":205,"skipped":3353,"failed":0}
S
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:38:55.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:38:55.157: INFO: Creating deployment "webserver-deployment"
Aug 26 23:38:55.167: INFO: Waiting for observed generation 1
Aug 26 23:38:57.663: INFO: Waiting for all required pods to come up
Aug 26 23:38:57.861: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug 26 23:39:11.901: INFO: Waiting for deployment "webserver-deployment" to complete
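The deployment under test can be rebuilt from the object dump printed further down; the image, labels, and RollingUpdate bounds (maxSurge=3, maxUnavailable=2) below are all taken from that dump:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver-deployment
  namespace: deployment-5606
spec:
  replicas: 10
  selector:
    matchLabels:
      name: httpd
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3
      maxUnavailable: 2
  template:
    metadata:
      labels:
        name: httpd
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
EOF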
Aug 26 23:39:11.910: INFO: Updating deployment "webserver-deployment" with a non-existent image
Aug 26 23:39:11.916: INFO: Updating deployment webserver-deployment
Aug 26 23:39:11.916: INFO: Waiting for observed generation 2
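The non-existent image update is a single template change; the container name (httpd) and the bogus image (webserver:404) are taken from the ReplicaSet dump below:

kubectl set image deployment/webserver-deployment -n deployment-5606 httpd=webserver:404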
Aug 26 23:39:13.933: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug 26 23:39:13.935: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug 26 23:39:13.937: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 26 23:39:13.943: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug 26 23:39:13.943: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug 26 23:39:13.947: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 26 23:39:13.950: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Aug 26 23:39:13.950: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Aug 26 23:39:13.955: INFO: Updating deployment webserver-deployment
Aug 26 23:39:13.955: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Aug 26 23:39:14.818: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Aug 26 23:39:15.537: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
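The 20/13 split verified above is the proportional-scaling arithmetic: when a mid-rollout deployment is scaled, the controller spreads the delta across its ReplicaSets roughly in proportion to their current sizes, with the total capped at replicas + maxSurge. Using the numbers from this log:

# cap = 30 desired + maxSurge(3) = 33; pre-scale sizes: old RS = 8, new RS = 5 (13 total)
# delta = 33 - 13 = 20 replicas to add
#   old RS: 8 + round(20 * 8/13) = 8 + 12 = 20
#   new RS: 5 + (20 - 12)        = 5 +  8 = 13
kubectl scale deployment webserver-deployment -n deployment-5606 --replicas=30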
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 26 23:39:18.690: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-5606 /apis/apps/v1/namespaces/deployment-5606/deployments/webserver-deployment b09b85ac-2c26-46bd-921e-0280d5d2d149 4046590 3 2020-08-26 23:38:55 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00357e378  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-26 23:39:14 +0000 UTC,LastTransitionTime:2020-08-26 23:39:14 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-08-26 23:39:16 +0000 UTC,LastTransitionTime:2020-08-26 23:38:55 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Aug 26 23:39:19.340: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-5606 /apis/apps/v1/namespaces/deployment-5606/replicasets/webserver-deployment-c7997dcc8 fa9cfc2c-1cbf-4c28-9953-ceaf8ac18943 4046585 3 2020-08-26 23:39:11 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment b09b85ac-2c26-46bd-921e-0280d5d2d149 0xc0036717b7 0xc0036717b8}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003671828  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 26 23:39:19.340: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Aug 26 23:39:19.340: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-5606 /apis/apps/v1/namespaces/deployment-5606/replicasets/webserver-deployment-595b5b9587 521fe245-c636-4ac7-8dde-6ae9647e775e 4046572 3 2020-08-26 23:38:55 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment b09b85ac-2c26-46bd-921e-0280d5d2d149 0xc0036716f7 0xc0036716f8}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003671758  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Aug 26 23:39:19.912: INFO: Pod "webserver-deployment-595b5b9587-26sbq" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-26sbq webserver-deployment-595b5b9587- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-595b5b9587-26sbq 3a7d4648-430a-40b3-a2b9-96968a3a3cd2 4046370 0 2020-08-26 23:38:55 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 521fe245-c636-4ac7-8dde-6ae9647e775e 0xc00550b1d7 0xc00550b1d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:38:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.129,StartTime:2020-08-26 23:38:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 23:39:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://eecceed24f0b29bd1e8fb233eaf9d52357513a180b96c43a518e42cb6a6d4043,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.129,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.912: INFO: Pod "webserver-deployment-595b5b9587-7qwdc" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-7qwdc webserver-deployment-595b5b9587- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-595b5b9587-7qwdc 048f8c4c-df1f-433b-9549-f1511657ad4a 4046426 0 2020-08-26 23:38:56 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 521fe245-c636-4ac7-8dde-6ae9647e775e 0xc00550b357 0xc00550b358}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:38:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:38:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.133,StartTime:2020-08-26 23:38:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 23:39:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://01f4c4d73ec53fd293941608beb76522169646c1d10fc65a9252a1f8d9377ada,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.133,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.913: INFO: Pod "webserver-deployment-595b5b9587-8q68p" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-8q68p webserver-deployment-595b5b9587- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-595b5b9587-8q68p fd034ce4-c06c-430a-ad7b-90b3e9001be9 4046401 0 2020-08-26 23:38:56 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 521fe245-c636-4ac7-8dde-6ae9647e775e 0xc00550b547 0xc00550b548}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:38:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:38:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.132,StartTime:2020-08-26 23:38:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 23:39:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ffbf47d75efbf299bd42ea584c7b6133808623086199d573a3833e83618dbf38,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.132,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.913: INFO: Pod "webserver-deployment-595b5b9587-96dbt" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-96dbt webserver-deployment-595b5b9587- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-595b5b9587-96dbt 6562e230-ceaa-47fd-a9b6-737df82d3fa8 4046616 0 2020-08-26 23:39:14 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 521fe245-c636-4ac7-8dde-6ae9647e775e 0xc00550b6d7 0xc00550b6d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-26 23:39:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.913: INFO: Pod "webserver-deployment-595b5b9587-bhngk" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-bhngk webserver-deployment-595b5b9587- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-595b5b9587-bhngk 16ea66d6-8d6d-448e-815c-73bbfde53737 4046613 0 2020-08-26 23:39:14 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 521fe245-c636-4ac7-8dde-6ae9647e775e 0xc00550b867 0xc00550b868}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-26 23:39:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.913: INFO: Pod "webserver-deployment-595b5b9587-bs5b6" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-bs5b6 webserver-deployment-595b5b9587- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-595b5b9587-bs5b6 33fe9a31-b7cb-4a26-80aa-ac7743cef3e2 4046380 0 2020-08-26 23:38:55 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 521fe245-c636-4ac7-8dde-6ae9647e775e 0xc00550b9f7 0xc00550b9f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:38:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:38:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.131,StartTime:2020-08-26 23:38:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 23:39:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://41f63842fa553e05db987de4ac28955b3e30030c5ce20de5cc3fbf89e828d905,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.131,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.914: INFO: Pod "webserver-deployment-595b5b9587-gbt2n" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-gbt2n webserver-deployment-595b5b9587- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-595b5b9587-gbt2n 2ba08556-ddae-46a0-99da-97565b319f94 4046374 0 2020-08-26 23:38:55 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 521fe245-c636-4ac7-8dde-6ae9647e775e 0xc00550bb77 0xc00550bb78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:38:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:38:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.59,StartTime:2020-08-26 23:38:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 23:39:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6188a21a1a3b40953b2e1317c6abbc05aae727abeb1c1c8a9d568bd06530f15b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.59,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.914: INFO: Pod "webserver-deployment-595b5b9587-l5n27" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-l5n27 webserver-deployment-595b5b9587- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-595b5b9587-l5n27 45a9477e-2bec-4e6a-927b-1411359b63ce 4046599 0 2020-08-26 23:39:14 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 521fe245-c636-4ac7-8dde-6ae9647e775e 0xc00550bd07 0xc00550bd08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-26 23:39:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.914: INFO: Pod "webserver-deployment-595b5b9587-ltstd" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-ltstd webserver-deployment-595b5b9587- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-595b5b9587-ltstd 86525200-d0d1-4836-b1fb-78f9d5144191 4046625 0 2020-08-26 23:39:14 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 521fe245-c636-4ac7-8dde-6ae9647e775e 0xc00550be67 0xc00550be68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-26 23:39:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
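
The "is not available" verdict above follows directly from the pod's status: the phase is Pending and the Ready condition is False with reason ContainersNotReady. As a rough sketch (modeled on, but not copied from, the pod utility logic in k8s.io/kubernetes), availability reduces to the Ready condition being True for at least minReadySeconds. The helper names below are illustrative; only the k8s.io API types are real:

    // availability.go -- a minimal sketch of the pod-availability predicate.
    // Assumes the standard k8s.io/api and k8s.io/apimachinery modules.
    package main

    import (
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // readyCondition returns the pod's Ready condition, if present.
    func readyCondition(pod *corev1.Pod) *corev1.PodCondition {
        for i := range pod.Status.Conditions {
            if pod.Status.Conditions[i].Type == corev1.PodReady {
                return &pod.Status.Conditions[i]
            }
        }
        return nil
    }

    // isPodAvailable reports whether the pod is Ready and has stayed Ready
    // for at least minReadySeconds (0 means "Ready is enough").
    func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now metav1.Time) bool {
        c := readyCondition(pod)
        if c == nil || c.Status != corev1.ConditionTrue {
            return false // the Pending pods in these dumps fail here
        }
        if minReadySeconds == 0 {
            return true
        }
        readyFor := now.Time.Sub(c.LastTransitionTime.Time)
        return readyFor >= time.Duration(minReadySeconds)*time.Second
    }

    func main() {
        pod := &corev1.Pod{Status: corev1.PodStatus{
            Phase: corev1.PodPending,
            Conditions: []corev1.PodCondition{{
                Type:   corev1.PodReady,
                Status: corev1.ConditionFalse, // as in webserver-deployment-595b5b9587-ltstd
            }},
        }}
        fmt.Println(isPodAvailable(pod, 0, metav1.Now())) // false
    }
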
Aug 26 23:39:19.914: INFO: Pod "webserver-deployment-595b5b9587-pmfbf" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-pmfbf webserver-deployment-595b5b9587- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-595b5b9587-pmfbf 7ee2d930-a82b-4bb6-9342-20cf20c58273 4046565 0 2020-08-26 23:39:15 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 521fe245-c636-4ac7-8dde-6ae9647e775e 0xc00550bfe7 0xc00550bfe8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.914: INFO: Pod "webserver-deployment-595b5b9587-pwpjh" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-pwpjh webserver-deployment-595b5b9587- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-595b5b9587-pwpjh b8652e28-a7fb-46d5-85b5-68ce0e7ef14c 4046568 0 2020-08-26 23:39:15 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 521fe245-c636-4ac7-8dde-6ae9647e775e 0xc003814107 0xc003814108}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.914: INFO: Pod "webserver-deployment-595b5b9587-r4vp7" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-r4vp7 webserver-deployment-595b5b9587- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-595b5b9587-r4vp7 95dc0735-6202-46cd-99ac-30f3363e6469 4046398 0 2020-08-26 23:38:55 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 521fe245-c636-4ac7-8dde-6ae9647e775e 0xc003814227 0xc003814228}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:38:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:38:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.61,StartTime:2020-08-26 23:38:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 23:39:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://327b22c689c6b1f2f9a5072fe13c69d49b0b8c6d1b7509d70a74c11798b35d2f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.61,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
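
By contrast, webserver-deployment-595b5b9587-r4vp7 above is available: Running phase, all four conditions True, a pod IP, and a Running container state with a container ID. Most of each multi-kilobyte dump is defaulted spec fields; the verdict hinges on a handful of status fields, which a small self-contained helper can condense. summarize below is an illustrative sketch, not part of the e2e framework:

    // summarize.go -- condense a dump like the one above into one line:
    // phase, Ready condition, node, pod IP, and the first container's state.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func summarize(pod *corev1.Pod) string {
        ready := corev1.ConditionUnknown
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                ready = c.Status
            }
        }
        state := "unknown"
        if cs := pod.Status.ContainerStatuses; len(cs) > 0 {
            switch {
            case cs[0].State.Running != nil:
                state = "Running"
            case cs[0].State.Waiting != nil:
                state = "Waiting/" + cs[0].State.Waiting.Reason
            case cs[0].State.Terminated != nil:
                state = "Terminated"
            }
        }
        return fmt.Sprintf("%s phase=%s ready=%s node=%s podIP=%s container=%s",
            pod.Name, pod.Status.Phase, ready, pod.Spec.NodeName, pod.Status.PodIP, state)
    }

    func main() {
        // For webserver-deployment-595b5b9587-r4vp7 this would print, roughly:
        //   ... phase=Running ready=True node=jerma-worker podIP=10.244.2.61 container=Running
        fmt.Println(summarize(&corev1.Pod{}))
    }
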
Aug 26 23:39:19.915: INFO: Pod "webserver-deployment-595b5b9587-r67jz" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-r67jz webserver-deployment-595b5b9587- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-595b5b9587-r67jz 6a308e8d-977c-4fb3-9d38-9b8a6367ca33 4046573 0 2020-08-26 23:39:14 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 521fe245-c636-4ac7-8dde-6ae9647e775e 0xc0038143a7 0xc0038143a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-26 23:39:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.915: INFO: Pod "webserver-deployment-595b5b9587-t4c8f" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-t4c8f webserver-deployment-595b5b9587- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-595b5b9587-t4c8f 5034918f-3718-4823-8550-2465f6539116 4046564 0 2020-08-26 23:39:15 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 521fe245-c636-4ac7-8dde-6ae9647e775e 0xc003814547 0xc003814548}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.915: INFO: Pod "webserver-deployment-595b5b9587-t5fn6" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-t5fn6 webserver-deployment-595b5b9587- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-595b5b9587-t5fn6 24a306ff-f814-4664-9336-54591a2451c8 4046609 0 2020-08-26 23:39:14 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 521fe245-c636-4ac7-8dde-6ae9647e775e 0xc003814787 0xc003814788}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-26 23:39:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.915: INFO: Pod "webserver-deployment-595b5b9587-tvvj2" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-tvvj2 webserver-deployment-595b5b9587- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-595b5b9587-tvvj2 a6e84809-41ca-45d0-845f-e35e7bb17c00 4046342 0 2020-08-26 23:38:55 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 521fe245-c636-4ac7-8dde-6ae9647e775e 0xc003814a17 0xc003814a18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:38:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.58,StartTime:2020-08-26 23:38:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 23:39:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c45f3a61ed9aa192918d3d5717a6b1d2042dd383f10b18e3fa1996ff6f1ead93,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.58,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.915: INFO: Pod "webserver-deployment-595b5b9587-tz4w2" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-tz4w2 webserver-deployment-595b5b9587- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-595b5b9587-tz4w2 70de1bfb-5051-48bf-be7e-c11f725bdf5f 4046365 0 2020-08-26 23:38:55 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 521fe245-c636-4ac7-8dde-6ae9647e775e 0xc003814b97 0xc003814b98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:38:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.130,StartTime:2020-08-26 23:38:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 23:39:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://775f7495898b56702c77b74567d9c3e9a6467966c23b13d4849202515341eaf7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.130,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.916: INFO: Pod "webserver-deployment-595b5b9587-wqpt2" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wqpt2 webserver-deployment-595b5b9587- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-595b5b9587-wqpt2 3b95f1ef-f3a2-4db4-934f-572d14df2f23 4046608 0 2020-08-26 23:39:14 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 521fe245-c636-4ac7-8dde-6ae9647e775e 0xc003814d17 0xc003814d18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-26 23:39:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.916: INFO: Pod "webserver-deployment-595b5b9587-xtb6p" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-xtb6p webserver-deployment-595b5b9587- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-595b5b9587-xtb6p dede8999-bec7-43c6-ab9d-133f015424af 4046566 0 2020-08-26 23:39:15 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 521fe245-c636-4ac7-8dde-6ae9647e775e 0xc003814eb7 0xc003814eb8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.916: INFO: Pod "webserver-deployment-595b5b9587-zmmh7" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-zmmh7 webserver-deployment-595b5b9587- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-595b5b9587-zmmh7 47fa793f-e30e-4e53-9428-6794a3d77bdb 4046567 0 2020-08-26 23:39:15 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 521fe245-c636-4ac7-8dde-6ae9647e775e 0xc003814fe7 0xc003814fe8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
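
Everything up to this point belongs to ReplicaSet webserver-deployment-595b5b9587, whose template runs docker.io/library/httpd:2.4.38-alpine; the dumps that follow belong to webserver-deployment-c7997dcc8, whose template references webserver:404, a tag that is not expected to resolve, so its pods never become ready. The Deployment controller stamps each ReplicaSet's pods with a pod-template-hash label, so the two generations can be separated either by that label or by the ReplicaSet owner reference visible in each dump. A hedged sketch of that grouping (helper names are illustrative):

    // group.go -- split pods like the ones above into their ReplicaSet
    // generations via the pod-template-hash label or the owner reference.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // groupByTemplateHash buckets pod names by their pod-template-hash label,
    // e.g. "595b5b9587" (httpd:2.4.38-alpine) vs "c7997dcc8" (webserver:404).
    func groupByTemplateHash(pods []corev1.Pod) map[string][]string {
        groups := map[string][]string{}
        for _, p := range pods {
            hash := p.Labels["pod-template-hash"]
            groups[hash] = append(groups[hash], p.Name)
        }
        return groups
    }

    // owningReplicaSet returns the name of the ReplicaSet that controls the
    // pod, read from its owner references (the `[{apps/v1 ReplicaSet ...}]`
    // entry near the start of each dump).
    func owningReplicaSet(pod *corev1.Pod) (string, bool) {
        for _, ref := range pod.OwnerReferences {
            if ref.Kind == "ReplicaSet" {
                return ref.Name, true
            }
        }
        return "", false
    }

    func main() {
        fmt.Println(groupByTemplateHash(nil)) // map[]
    }
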
Aug 26 23:39:19.916: INFO: Pod "webserver-deployment-c7997dcc8-75h58" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-75h58 webserver-deployment-c7997dcc8- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-c7997dcc8-75h58 4871125b-b87d-48f7-93a9-617ef47afa56 4046619 0 2020-08-26 23:39:15 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fa9cfc2c-1cbf-4c28-9953-ceaf8ac18943 0xc003815117 0xc003815118}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-26 23:39:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.917: INFO: Pod "webserver-deployment-c7997dcc8-7jqfn" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7jqfn webserver-deployment-c7997dcc8- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-c7997dcc8-7jqfn d59a925b-9ab6-49cc-ba8c-f0d4179fa976 4046610 0 2020-08-26 23:39:14 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fa9cfc2c-1cbf-4c28-9953-ceaf8ac18943 0xc0038152a7 0xc0038152a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-26 23:39:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.917: INFO: Pod "webserver-deployment-c7997dcc8-945mm" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-945mm webserver-deployment-c7997dcc8- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-c7997dcc8-945mm 508aafdb-7664-405e-aa6a-02499ac8ab10 4046563 0 2020-08-26 23:39:15 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fa9cfc2c-1cbf-4c28-9953-ceaf8ac18943 0xc003815427 0xc003815428}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.917: INFO: Pod "webserver-deployment-c7997dcc8-9f8x9" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9f8x9 webserver-deployment-c7997dcc8- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-c7997dcc8-9f8x9 f86bcd71-a729-40e4-88f7-0ea133e87b5a 4046562 0 2020-08-26 23:39:15 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fa9cfc2c-1cbf-4c28-9953-ceaf8ac18943 0xc003815567 0xc003815568}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.917: INFO: Pod "webserver-deployment-c7997dcc8-dqtg4" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dqtg4 webserver-deployment-c7997dcc8- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-c7997dcc8-dqtg4 e4ba519f-323e-4467-87bb-57c5de143833 4046498 0 2020-08-26 23:39:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fa9cfc2c-1cbf-4c28-9953-ceaf8ac18943 0xc003815697 0xc003815698}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-26 23:39:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.917: INFO: Pod "webserver-deployment-c7997dcc8-drzqg" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-drzqg webserver-deployment-c7997dcc8- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-c7997dcc8-drzqg a9d3a0f7-d5f8-4a6f-9b3c-cbfa85f94a48 4046480 0 2020-08-26 23:39:11 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fa9cfc2c-1cbf-4c28-9953-ceaf8ac18943 0xc003815827 0xc003815828}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-26 23:39:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.917: INFO: Pod "webserver-deployment-c7997dcc8-n2wjj" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-n2wjj webserver-deployment-c7997dcc8- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-c7997dcc8-n2wjj 03ba6dec-96f8-4b3f-9790-d19884ad0c71 4046475 0 2020-08-26 23:39:11 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fa9cfc2c-1cbf-4c28-9953-ceaf8ac18943 0xc0038159c7 0xc0038159c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-26 23:39:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.918: INFO: Pod "webserver-deployment-c7997dcc8-n4sv7" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-n4sv7 webserver-deployment-c7997dcc8- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-c7997dcc8-n4sv7 845e0521-999c-44a6-9a51-784346be2ebc 4046502 0 2020-08-26 23:39:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fa9cfc2c-1cbf-4c28-9953-ceaf8ac18943 0xc003815b47 0xc003815b48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-26 23:39:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.918: INFO: Pod "webserver-deployment-c7997dcc8-qd5ks" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qd5ks webserver-deployment-c7997dcc8- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-c7997dcc8-qd5ks 7b74bf62-eebc-4748-9881-a3a97d4aaa6d 4046584 0 2020-08-26 23:39:14 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fa9cfc2c-1cbf-4c28-9953-ceaf8ac18943 0xc003815cc7 0xc003815cc8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-26 23:39:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.918: INFO: Pod "webserver-deployment-c7997dcc8-tj2hg" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tj2hg webserver-deployment-c7997dcc8- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-c7997dcc8-tj2hg 8e9c4d44-cd0e-444a-af97-e79a9fae48b5 4046561 0 2020-08-26 23:39:15 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fa9cfc2c-1cbf-4c28-9953-ceaf8ac18943 0xc003815e47 0xc003815e48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.918: INFO: Pod "webserver-deployment-c7997dcc8-vsb8d" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vsb8d webserver-deployment-c7997dcc8- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-c7997dcc8-vsb8d 0235b907-8512-40ab-a9de-575d47517b0e 4046601 0 2020-08-26 23:39:14 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fa9cfc2c-1cbf-4c28-9953-ceaf8ac18943 0xc003815f77 0xc003815f78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-26 23:39:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.918: INFO: Pod "webserver-deployment-c7997dcc8-xbgn2" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xbgn2 webserver-deployment-c7997dcc8- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-c7997dcc8-xbgn2 cfb24c09-eb87-4e67-8410-b7180713d22a 4046570 0 2020-08-26 23:39:15 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fa9cfc2c-1cbf-4c28-9953-ceaf8ac18943 0xc00369a0f7 0xc00369a0f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 23:39:19.918: INFO: Pod "webserver-deployment-c7997dcc8-z4mnl" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-z4mnl webserver-deployment-c7997dcc8- deployment-5606 /api/v1/namespaces/deployment-5606/pods/webserver-deployment-c7997dcc8-z4mnl 6c52bbbe-c3ed-441f-bc07-e9cb5532dcec 4046490 0 2020-08-26 23:39:11 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 fa9cfc2c-1cbf-4c28-9953-ceaf8ac18943 0xc00369a237 0xc00369a238}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nrgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nrgf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nrgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:39:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-26 23:39:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:39:19.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5606" for this suite.

• [SLOW TEST:25.793 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":206,"skipped":3354,"failed":0}
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:39:20.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0826 23:40:04.938842       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 26 23:40:04.938: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:40:04.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3036" for this suite.

• [SLOW TEST:44.436 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":207,"skipped":3356,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:40:05.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 23:40:07.885: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 23:40:09.985: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082007, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082007, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082008, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082007, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 23:40:12.238: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082007, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082007, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082008, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082007, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 23:40:14.057: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082007, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082007, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082008, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082007, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 23:40:17.474: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:40:17.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7433" for this suite.
STEP: Destroying namespace "webhook-7433-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.780 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":208,"skipped":3358,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:40:18.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 23:40:18.223: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d55da967-aa66-4599-8c81-da16874db688" in namespace "downward-api-21" to be "success or failure"
Aug 26 23:40:18.287: INFO: Pod "downwardapi-volume-d55da967-aa66-4599-8c81-da16874db688": Phase="Pending", Reason="", readiness=false. Elapsed: 63.957439ms
Aug 26 23:40:20.291: INFO: Pod "downwardapi-volume-d55da967-aa66-4599-8c81-da16874db688": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068114439s
Aug 26 23:40:22.296: INFO: Pod "downwardapi-volume-d55da967-aa66-4599-8c81-da16874db688": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072752823s
STEP: Saw pod success
Aug 26 23:40:22.296: INFO: Pod "downwardapi-volume-d55da967-aa66-4599-8c81-da16874db688" satisfied condition "success or failure"
Aug 26 23:40:22.299: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-d55da967-aa66-4599-8c81-da16874db688 container client-container: 
STEP: delete the pod
Aug 26 23:40:22.351: INFO: Waiting for pod downwardapi-volume-d55da967-aa66-4599-8c81-da16874db688 to disappear
Aug 26 23:40:22.364: INFO: Pod downwardapi-volume-d55da967-aa66-4599-8c81-da16874db688 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:40:22.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-21" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3382,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:40:22.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:40:27.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1979" for this suite.

• [SLOW TEST:5.233 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":210,"skipped":3393,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:40:27.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 26 23:40:27.688: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:40:35.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8664" for this suite.

• [SLOW TEST:8.701 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":211,"skipped":3417,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:40:36.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-8790d7c1-7ebc-4561-a8af-91f8467c6139
STEP: Creating a pod to test consume secrets
Aug 26 23:40:36.738: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-156a0c60-9fb3-4002-85d8-20f562ffa5fb" in namespace "projected-1548" to be "success or failure"
Aug 26 23:40:36.754: INFO: Pod "pod-projected-secrets-156a0c60-9fb3-4002-85d8-20f562ffa5fb": Phase="Pending", Reason="", readiness=false. Elapsed: 16.444066ms
Aug 26 23:40:38.758: INFO: Pod "pod-projected-secrets-156a0c60-9fb3-4002-85d8-20f562ffa5fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020262071s
Aug 26 23:40:40.762: INFO: Pod "pod-projected-secrets-156a0c60-9fb3-4002-85d8-20f562ffa5fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024235454s
STEP: Saw pod success
Aug 26 23:40:40.762: INFO: Pod "pod-projected-secrets-156a0c60-9fb3-4002-85d8-20f562ffa5fb" satisfied condition "success or failure"
Aug 26 23:40:40.765: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-156a0c60-9fb3-4002-85d8-20f562ffa5fb container projected-secret-volume-test: 
STEP: delete the pod
Aug 26 23:40:40.960: INFO: Waiting for pod pod-projected-secrets-156a0c60-9fb3-4002-85d8-20f562ffa5fb to disappear
Aug 26 23:40:41.088: INFO: Pod pod-projected-secrets-156a0c60-9fb3-4002-85d8-20f562ffa5fb no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:40:41.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1548" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3443,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:40:41.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-4415
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Aug 26 23:40:41.730: INFO: Found 0 stateful pods, waiting for 3
Aug 26 23:40:51.867: INFO: Found 2 stateful pods, waiting for 3
Aug 26 23:41:01.734: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 23:41:01.734: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 23:41:01.734: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 23:41:01.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4415 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 23:41:04.768: INFO: stderr: "I0826 23:41:04.620071    3634 log.go:172] (0xc000968bb0) (0xc0007923c0) Create stream\nI0826 23:41:04.620120    3634 log.go:172] (0xc000968bb0) (0xc0007923c0) Stream added, broadcasting: 1\nI0826 23:41:04.623284    3634 log.go:172] (0xc000968bb0) Reply frame received for 1\nI0826 23:41:04.623321    3634 log.go:172] (0xc000968bb0) (0xc000792460) Create stream\nI0826 23:41:04.623331    3634 log.go:172] (0xc000968bb0) (0xc000792460) Stream added, broadcasting: 3\nI0826 23:41:04.624442    3634 log.go:172] (0xc000968bb0) Reply frame received for 3\nI0826 23:41:04.624484    3634 log.go:172] (0xc000968bb0) (0xc0007d8000) Create stream\nI0826 23:41:04.624495    3634 log.go:172] (0xc000968bb0) (0xc0007d8000) Stream added, broadcasting: 5\nI0826 23:41:04.625644    3634 log.go:172] (0xc000968bb0) Reply frame received for 5\nI0826 23:41:04.725907    3634 log.go:172] (0xc000968bb0) Data frame received for 5\nI0826 23:41:04.725932    3634 log.go:172] (0xc0007d8000) (5) Data frame handling\nI0826 23:41:04.725948    3634 log.go:172] (0xc0007d8000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 23:41:04.757647    3634 log.go:172] (0xc000968bb0) Data frame received for 3\nI0826 23:41:04.757676    3634 log.go:172] (0xc000792460) (3) Data frame handling\nI0826 23:41:04.757689    3634 log.go:172] (0xc000792460) (3) Data frame sent\nI0826 23:41:04.757695    3634 log.go:172] (0xc000968bb0) Data frame received for 3\nI0826 23:41:04.757700    3634 log.go:172] (0xc000792460) (3) Data frame handling\nI0826 23:41:04.757986    3634 log.go:172] (0xc000968bb0) Data frame received for 5\nI0826 23:41:04.758016    3634 log.go:172] (0xc0007d8000) (5) Data frame handling\nI0826 23:41:04.759690    3634 log.go:172] (0xc000968bb0) Data frame received for 1\nI0826 23:41:04.759704    3634 log.go:172] (0xc0007923c0) (1) Data frame handling\nI0826 23:41:04.759716    3634 log.go:172] (0xc0007923c0) (1) Data frame sent\nI0826 23:41:04.759733    3634 log.go:172] (0xc000968bb0) (0xc0007923c0) Stream removed, broadcasting: 1\nI0826 23:41:04.759845    3634 log.go:172] (0xc000968bb0) Go away received\nI0826 23:41:04.759981    3634 log.go:172] (0xc000968bb0) (0xc0007923c0) Stream removed, broadcasting: 1\nI0826 23:41:04.759993    3634 log.go:172] (0xc000968bb0) (0xc000792460) Stream removed, broadcasting: 3\nI0826 23:41:04.759998    3634 log.go:172] (0xc000968bb0) (0xc0007d8000) Stream removed, broadcasting: 5\n"
Aug 26 23:41:04.768: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 23:41:04.768: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 26 23:41:14.801: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Aug 26 23:41:24.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4415 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 23:41:25.055: INFO: stderr: "I0826 23:41:24.965161    3664 log.go:172] (0xc000ad0a50) (0xc000902140) Create stream\nI0826 23:41:24.965229    3664 log.go:172] (0xc000ad0a50) (0xc000902140) Stream added, broadcasting: 1\nI0826 23:41:24.967871    3664 log.go:172] (0xc000ad0a50) Reply frame received for 1\nI0826 23:41:24.967936    3664 log.go:172] (0xc000ad0a50) (0xc0009021e0) Create stream\nI0826 23:41:24.967968    3664 log.go:172] (0xc000ad0a50) (0xc0009021e0) Stream added, broadcasting: 3\nI0826 23:41:24.969198    3664 log.go:172] (0xc000ad0a50) Reply frame received for 3\nI0826 23:41:24.969251    3664 log.go:172] (0xc000ad0a50) (0xc000902280) Create stream\nI0826 23:41:24.969276    3664 log.go:172] (0xc000ad0a50) (0xc000902280) Stream added, broadcasting: 5\nI0826 23:41:24.970344    3664 log.go:172] (0xc000ad0a50) Reply frame received for 5\nI0826 23:41:25.043271    3664 log.go:172] (0xc000ad0a50) Data frame received for 3\nI0826 23:41:25.043304    3664 log.go:172] (0xc0009021e0) (3) Data frame handling\nI0826 23:41:25.043344    3664 log.go:172] (0xc000ad0a50) Data frame received for 5\nI0826 23:41:25.043372    3664 log.go:172] (0xc000902280) (5) Data frame handling\nI0826 23:41:25.043392    3664 log.go:172] (0xc000902280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0826 23:41:25.043414    3664 log.go:172] (0xc0009021e0) (3) Data frame sent\nI0826 23:41:25.043454    3664 log.go:172] (0xc000ad0a50) Data frame received for 3\nI0826 23:41:25.043466    3664 log.go:172] (0xc0009021e0) (3) Data frame handling\nI0826 23:41:25.043486    3664 log.go:172] (0xc000ad0a50) Data frame received for 5\nI0826 23:41:25.043498    3664 log.go:172] (0xc000902280) (5) Data frame handling\nI0826 23:41:25.044705    3664 log.go:172] (0xc000ad0a50) Data frame received for 1\nI0826 23:41:25.044846    3664 log.go:172] (0xc000902140) (1) Data frame handling\nI0826 23:41:25.044868    3664 log.go:172] (0xc000902140) (1) Data frame sent\nI0826 23:41:25.044883    3664 log.go:172] (0xc000ad0a50) (0xc000902140) Stream removed, broadcasting: 1\nI0826 23:41:25.044965    3664 log.go:172] (0xc000ad0a50) Go away received\nI0826 23:41:25.045385    3664 log.go:172] (0xc000ad0a50) (0xc000902140) Stream removed, broadcasting: 1\nI0826 23:41:25.045407    3664 log.go:172] (0xc000ad0a50) (0xc0009021e0) Stream removed, broadcasting: 3\nI0826 23:41:25.045417    3664 log.go:172] (0xc000ad0a50) (0xc000902280) Stream removed, broadcasting: 5\n"
Aug 26 23:41:25.055: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 23:41:25.055: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

STEP: Rolling back to a previous revision
Aug 26 23:41:55.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4415 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 23:41:55.422: INFO: stderr: "I0826 23:41:55.261044    3686 log.go:172] (0xc000910000) (0xc0005ba6e0) Create stream\nI0826 23:41:55.261098    3686 log.go:172] (0xc000910000) (0xc0005ba6e0) Stream added, broadcasting: 1\nI0826 23:41:55.263199    3686 log.go:172] (0xc000910000) Reply frame received for 1\nI0826 23:41:55.263239    3686 log.go:172] (0xc000910000) (0xc000948000) Create stream\nI0826 23:41:55.263247    3686 log.go:172] (0xc000910000) (0xc000948000) Stream added, broadcasting: 3\nI0826 23:41:55.264053    3686 log.go:172] (0xc000910000) Reply frame received for 3\nI0826 23:41:55.264079    3686 log.go:172] (0xc000910000) (0xc0007b86e0) Create stream\nI0826 23:41:55.264087    3686 log.go:172] (0xc000910000) (0xc0007b86e0) Stream added, broadcasting: 5\nI0826 23:41:55.265137    3686 log.go:172] (0xc000910000) Reply frame received for 5\nI0826 23:41:55.343347    3686 log.go:172] (0xc000910000) Data frame received for 5\nI0826 23:41:55.343393    3686 log.go:172] (0xc0007b86e0) (5) Data frame handling\nI0826 23:41:55.343430    3686 log.go:172] (0xc0007b86e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 23:41:55.411297    3686 log.go:172] (0xc000910000) Data frame received for 3\nI0826 23:41:55.411327    3686 log.go:172] (0xc000948000) (3) Data frame handling\nI0826 23:41:55.411338    3686 log.go:172] (0xc000948000) (3) Data frame sent\nI0826 23:41:55.411344    3686 log.go:172] (0xc000910000) Data frame received for 3\nI0826 23:41:55.411348    3686 log.go:172] (0xc000948000) (3) Data frame handling\nI0826 23:41:55.411588    3686 log.go:172] (0xc000910000) Data frame received for 5\nI0826 23:41:55.411612    3686 log.go:172] (0xc0007b86e0) (5) Data frame handling\nI0826 23:41:55.413184    3686 log.go:172] (0xc000910000) Data frame received for 1\nI0826 23:41:55.413210    3686 log.go:172] (0xc0005ba6e0) (1) Data frame handling\nI0826 23:41:55.413241    3686 log.go:172] (0xc0005ba6e0) (1) Data frame sent\nI0826 23:41:55.413260    3686 log.go:172] (0xc000910000) (0xc0005ba6e0) Stream removed, broadcasting: 1\nI0826 23:41:55.413385    3686 log.go:172] (0xc000910000) Go away received\nI0826 23:41:55.413747    3686 log.go:172] (0xc000910000) (0xc0005ba6e0) Stream removed, broadcasting: 1\nI0826 23:41:55.413766    3686 log.go:172] (0xc000910000) (0xc000948000) Stream removed, broadcasting: 3\nI0826 23:41:55.413775    3686 log.go:172] (0xc000910000) (0xc0007b86e0) Stream removed, broadcasting: 5\n"
Aug 26 23:41:55.422: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 23:41:55.422: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 26 23:42:05.452: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Aug 26 23:42:15.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4415 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 23:42:15.692: INFO: stderr: "I0826 23:42:15.617799    3705 log.go:172] (0xc000aeae70) (0xc000ad2460) Create stream\nI0826 23:42:15.617844    3705 log.go:172] (0xc000aeae70) (0xc000ad2460) Stream added, broadcasting: 1\nI0826 23:42:15.619579    3705 log.go:172] (0xc000aeae70) Reply frame received for 1\nI0826 23:42:15.619625    3705 log.go:172] (0xc000aeae70) (0xc000a3c460) Create stream\nI0826 23:42:15.619641    3705 log.go:172] (0xc000aeae70) (0xc000a3c460) Stream added, broadcasting: 3\nI0826 23:42:15.620637    3705 log.go:172] (0xc000aeae70) Reply frame received for 3\nI0826 23:42:15.620676    3705 log.go:172] (0xc000aeae70) (0xc000adc640) Create stream\nI0826 23:42:15.620690    3705 log.go:172] (0xc000aeae70) (0xc000adc640) Stream added, broadcasting: 5\nI0826 23:42:15.621673    3705 log.go:172] (0xc000aeae70) Reply frame received for 5\nI0826 23:42:15.682220    3705 log.go:172] (0xc000aeae70) Data frame received for 3\nI0826 23:42:15.682260    3705 log.go:172] (0xc000a3c460) (3) Data frame handling\nI0826 23:42:15.682274    3705 log.go:172] (0xc000a3c460) (3) Data frame sent\nI0826 23:42:15.682282    3705 log.go:172] (0xc000aeae70) Data frame received for 3\nI0826 23:42:15.682287    3705 log.go:172] (0xc000a3c460) (3) Data frame handling\nI0826 23:42:15.682364    3705 log.go:172] (0xc000aeae70) Data frame received for 5\nI0826 23:42:15.682392    3705 log.go:172] (0xc000adc640) (5) Data frame handling\nI0826 23:42:15.682405    3705 log.go:172] (0xc000adc640) (5) Data frame sent\nI0826 23:42:15.682420    3705 log.go:172] (0xc000aeae70) Data frame received for 5\nI0826 23:42:15.682429    3705 log.go:172] (0xc000adc640) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0826 23:42:15.683555    3705 log.go:172] (0xc000aeae70) Data frame received for 1\nI0826 23:42:15.683579    3705 log.go:172] (0xc000ad2460) (1) Data frame handling\nI0826 23:42:15.683594    3705 log.go:172] (0xc000ad2460) (1) Data frame sent\nI0826 23:42:15.683606    3705 log.go:172] (0xc000aeae70) (0xc000ad2460) Stream removed, broadcasting: 1\nI0826 23:42:15.683656    3705 log.go:172] (0xc000aeae70) Go away received\nI0826 23:42:15.683953    3705 log.go:172] (0xc000aeae70) (0xc000ad2460) Stream removed, broadcasting: 1\nI0826 23:42:15.683967    3705 log.go:172] (0xc000aeae70) (0xc000a3c460) Stream removed, broadcasting: 3\nI0826 23:42:15.683974    3705 log.go:172] (0xc000aeae70) (0xc000adc640) Stream removed, broadcasting: 5\n"
Aug 26 23:42:15.693: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 23:42:15.693: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 26 23:42:25.712: INFO: Waiting for StatefulSet statefulset-4415/ss2 to complete update
Aug 26 23:42:25.712: INFO: Waiting for Pod statefulset-4415/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 26 23:42:25.712: INFO: Waiting for Pod statefulset-4415/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 26 23:42:35.799: INFO: Waiting for StatefulSet statefulset-4415/ss2 to complete update
Aug 26 23:42:35.800: INFO: Waiting for Pod statefulset-4415/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 26 23:42:45.718: INFO: Waiting for StatefulSet statefulset-4415/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 26 23:42:55.720: INFO: Deleting all statefulset in ns statefulset-4415
Aug 26 23:42:55.723: INFO: Scaling statefulset ss2 to 0
Aug 26 23:43:25.757: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 23:43:25.759: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:43:25.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4415" for this suite.

• [SLOW TEST:164.669 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":213,"skipped":3459,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:43:25.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-2dbcd1c5-80b2-47df-b8e3-55f0122c76d7 in namespace container-probe-811
Aug 26 23:43:31.922: INFO: Started pod busybox-2dbcd1c5-80b2-47df-b8e3-55f0122c76d7 in namespace container-probe-811
STEP: checking the pod's current state and verifying that restartCount is present
Aug 26 23:43:31.937: INFO: Initial restart count of pod busybox-2dbcd1c5-80b2-47df-b8e3-55f0122c76d7 is 0
Aug 26 23:44:22.274: INFO: Restart count of pod container-probe-811/busybox-2dbcd1c5-80b2-47df-b8e3-55f0122c76d7 is now 1 (50.337178974s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:44:22.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-811" for this suite.

• [SLOW TEST:56.623 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3474,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:44:22.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 26 23:44:22.503: INFO: Waiting up to 5m0s for pod "downward-api-a0c2d244-8b19-4f5d-9a2b-a7a5e96c3ef4" in namespace "downward-api-3456" to be "success or failure"
Aug 26 23:44:22.507: INFO: Pod "downward-api-a0c2d244-8b19-4f5d-9a2b-a7a5e96c3ef4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.295961ms
Aug 26 23:44:24.563: INFO: Pod "downward-api-a0c2d244-8b19-4f5d-9a2b-a7a5e96c3ef4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060014392s
Aug 26 23:44:26.567: INFO: Pod "downward-api-a0c2d244-8b19-4f5d-9a2b-a7a5e96c3ef4": Phase="Running", Reason="", readiness=true. Elapsed: 4.063798476s
Aug 26 23:44:28.599: INFO: Pod "downward-api-a0c2d244-8b19-4f5d-9a2b-a7a5e96c3ef4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.095444561s
STEP: Saw pod success
Aug 26 23:44:28.599: INFO: Pod "downward-api-a0c2d244-8b19-4f5d-9a2b-a7a5e96c3ef4" satisfied condition "success or failure"
Aug 26 23:44:28.602: INFO: Trying to get logs from node jerma-worker2 pod downward-api-a0c2d244-8b19-4f5d-9a2b-a7a5e96c3ef4 container dapi-container: 
STEP: delete the pod
Aug 26 23:44:28.660: INFO: Waiting for pod downward-api-a0c2d244-8b19-4f5d-9a2b-a7a5e96c3ef4 to disappear
Aug 26 23:44:28.674: INFO: Pod downward-api-a0c2d244-8b19-4f5d-9a2b-a7a5e96c3ef4 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:44:28.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3456" for this suite.

• [SLOW TEST:6.259 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3496,"failed":0}
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:44:28.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:44:45.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9317" for this suite.

• [SLOW TEST:17.172 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":216,"skipped":3496,"failed":0}
SSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:44:45.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-3adb2e91-6ef3-4672-b80b-86a544433b38
STEP: Creating a pod to test consume secrets
Aug 26 23:44:45.970: INFO: Waiting up to 5m0s for pod "pod-secrets-474b4eb2-9aee-4e88-82be-71a05995963c" in namespace "secrets-4542" to be "success or failure"
Aug 26 23:44:45.980: INFO: Pod "pod-secrets-474b4eb2-9aee-4e88-82be-71a05995963c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.763506ms
Aug 26 23:44:47.984: INFO: Pod "pod-secrets-474b4eb2-9aee-4e88-82be-71a05995963c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014093433s
Aug 26 23:44:49.988: INFO: Pod "pod-secrets-474b4eb2-9aee-4e88-82be-71a05995963c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017808889s
STEP: Saw pod success
Aug 26 23:44:49.988: INFO: Pod "pod-secrets-474b4eb2-9aee-4e88-82be-71a05995963c" satisfied condition "success or failure"
Aug 26 23:44:49.990: INFO: Trying to get logs from node jerma-worker pod pod-secrets-474b4eb2-9aee-4e88-82be-71a05995963c container secret-env-test: 
STEP: delete the pod
Aug 26 23:44:50.023: INFO: Waiting for pod pod-secrets-474b4eb2-9aee-4e88-82be-71a05995963c to disappear
Aug 26 23:44:50.034: INFO: Pod pod-secrets-474b4eb2-9aee-4e88-82be-71a05995963c no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:44:50.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4542" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3499,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:44:50.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:44:50.135: INFO: Waiting up to 5m0s for pod "busybox-user-65534-7b4e244d-f104-4665-af19-664628456560" in namespace "security-context-test-9029" to be "success or failure"
Aug 26 23:44:50.148: INFO: Pod "busybox-user-65534-7b4e244d-f104-4665-af19-664628456560": Phase="Pending", Reason="", readiness=false. Elapsed: 12.369884ms
Aug 26 23:44:52.151: INFO: Pod "busybox-user-65534-7b4e244d-f104-4665-af19-664628456560": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01587075s
Aug 26 23:44:54.156: INFO: Pod "busybox-user-65534-7b4e244d-f104-4665-af19-664628456560": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020001932s
Aug 26 23:44:54.156: INFO: Pod "busybox-user-65534-7b4e244d-f104-4665-af19-664628456560" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:44:54.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9029" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3524,"failed":0}
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:44:54.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-9158d03e-a073-4fa4-986c-6072bbbc8d62
STEP: Creating a pod to test consume secrets
Aug 26 23:44:54.263: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-49de8f95-bd14-4be6-abd6-305bdd86affa" in namespace "projected-3701" to be "success or failure"
Aug 26 23:44:54.285: INFO: Pod "pod-projected-secrets-49de8f95-bd14-4be6-abd6-305bdd86affa": Phase="Pending", Reason="", readiness=false. Elapsed: 22.105225ms
Aug 26 23:44:56.300: INFO: Pod "pod-projected-secrets-49de8f95-bd14-4be6-abd6-305bdd86affa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036743431s
Aug 26 23:44:58.304: INFO: Pod "pod-projected-secrets-49de8f95-bd14-4be6-abd6-305bdd86affa": Phase="Running", Reason="", readiness=true. Elapsed: 4.041481103s
Aug 26 23:45:00.308: INFO: Pod "pod-projected-secrets-49de8f95-bd14-4be6-abd6-305bdd86affa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.044975311s
STEP: Saw pod success
Aug 26 23:45:00.308: INFO: Pod "pod-projected-secrets-49de8f95-bd14-4be6-abd6-305bdd86affa" satisfied condition "success or failure"
Aug 26 23:45:00.311: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-49de8f95-bd14-4be6-abd6-305bdd86affa container projected-secret-volume-test: 
STEP: delete the pod
Aug 26 23:45:00.340: INFO: Waiting for pod pod-projected-secrets-49de8f95-bd14-4be6-abd6-305bdd86affa to disappear
Aug 26 23:45:00.351: INFO: Pod pod-projected-secrets-49de8f95-bd14-4be6-abd6-305bdd86affa no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:45:00.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3701" for this suite.

• [SLOW TEST:6.194 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3528,"failed":0}
SSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:45:00.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:45:00.431: INFO: Creating ReplicaSet my-hostname-basic-8bb46de2-e1b3-43c4-80a2-cccdf73187b7
Aug 26 23:45:00.453: INFO: Pod name my-hostname-basic-8bb46de2-e1b3-43c4-80a2-cccdf73187b7: Found 0 pods out of 1
Aug 26 23:45:05.474: INFO: Pod name my-hostname-basic-8bb46de2-e1b3-43c4-80a2-cccdf73187b7: Found 1 pods out of 1
Aug 26 23:45:05.474: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-8bb46de2-e1b3-43c4-80a2-cccdf73187b7" is running
Aug 26 23:45:05.477: INFO: Pod "my-hostname-basic-8bb46de2-e1b3-43c4-80a2-cccdf73187b7-c5zqt" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 23:45:00 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 23:45:03 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 23:45:03 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 23:45:00 +0000 UTC Reason: Message:}])
Aug 26 23:45:05.477: INFO: Trying to dial the pod
Aug 26 23:45:10.488: INFO: Controller my-hostname-basic-8bb46de2-e1b3-43c4-80a2-cccdf73187b7: Got expected result from replica 1 [my-hostname-basic-8bb46de2-e1b3-43c4-80a2-cccdf73187b7-c5zqt]: "my-hostname-basic-8bb46de2-e1b3-43c4-80a2-cccdf73187b7-c5zqt", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:45:10.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5189" for this suite.

• [SLOW TEST:10.139 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":220,"skipped":3534,"failed":0}
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:45:10.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-f3aeeb68-34b6-4438-99e7-a5ede64fd320
STEP: Creating a pod to test consume secrets
Aug 26 23:45:10.651: INFO: Waiting up to 5m0s for pod "pod-secrets-910efde6-61bc-455d-8876-28ab10654804" in namespace "secrets-6671" to be "success or failure"
Aug 26 23:45:10.657: INFO: Pod "pod-secrets-910efde6-61bc-455d-8876-28ab10654804": Phase="Pending", Reason="", readiness=false. Elapsed: 6.577508ms
Aug 26 23:45:12.661: INFO: Pod "pod-secrets-910efde6-61bc-455d-8876-28ab10654804": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010116981s
Aug 26 23:45:14.664: INFO: Pod "pod-secrets-910efde6-61bc-455d-8876-28ab10654804": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013484559s
STEP: Saw pod success
Aug 26 23:45:14.664: INFO: Pod "pod-secrets-910efde6-61bc-455d-8876-28ab10654804" satisfied condition "success or failure"
Aug 26 23:45:14.667: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-910efde6-61bc-455d-8876-28ab10654804 container secret-volume-test: 
STEP: delete the pod
Aug 26 23:45:14.688: INFO: Waiting for pod pod-secrets-910efde6-61bc-455d-8876-28ab10654804 to disappear
Aug 26 23:45:14.693: INFO: Pod pod-secrets-910efde6-61bc-455d-8876-28ab10654804 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:45:14.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6671" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3536,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:45:14.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 26 23:45:14.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4442'
Aug 26 23:45:14.906: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 26 23:45:14.906: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Aug 26 23:45:14.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-4442'
Aug 26 23:45:15.036: INFO: stderr: ""
Aug 26 23:45:15.036: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:45:15.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4442" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Deprecated] [Conformance]","total":278,"completed":222,"skipped":3588,"failed":0}
SS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:45:15.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug 26 23:45:15.135: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4652 /api/v1/namespaces/watch-4652/configmaps/e2e-watch-test-configmap-a a49d66a5-5eb6-4620-b46e-522acc1a3fea 4048904 0 2020-08-26 23:45:15 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 26 23:45:15.135: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4652 /api/v1/namespaces/watch-4652/configmaps/e2e-watch-test-configmap-a a49d66a5-5eb6-4620-b46e-522acc1a3fea 4048904 0 2020-08-26 23:45:15 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Aug 26 23:45:25.143: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4652 /api/v1/namespaces/watch-4652/configmaps/e2e-watch-test-configmap-a a49d66a5-5eb6-4620-b46e-522acc1a3fea 4048966 0 2020-08-26 23:45:15 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 26 23:45:25.143: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4652 /api/v1/namespaces/watch-4652/configmaps/e2e-watch-test-configmap-a a49d66a5-5eb6-4620-b46e-522acc1a3fea 4048966 0 2020-08-26 23:45:15 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Aug 26 23:45:35.151: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4652 /api/v1/namespaces/watch-4652/configmaps/e2e-watch-test-configmap-a a49d66a5-5eb6-4620-b46e-522acc1a3fea 4048996 0 2020-08-26 23:45:15 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 26 23:45:35.151: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4652 /api/v1/namespaces/watch-4652/configmaps/e2e-watch-test-configmap-a a49d66a5-5eb6-4620-b46e-522acc1a3fea 4048996 0 2020-08-26 23:45:15 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Aug 26 23:45:45.158: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4652 /api/v1/namespaces/watch-4652/configmaps/e2e-watch-test-configmap-a a49d66a5-5eb6-4620-b46e-522acc1a3fea 4049026 0 2020-08-26 23:45:15 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 26 23:45:45.159: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-4652 /api/v1/namespaces/watch-4652/configmaps/e2e-watch-test-configmap-a a49d66a5-5eb6-4620-b46e-522acc1a3fea 4049026 0 2020-08-26 23:45:15 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Aug 26 23:45:55.168: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4652 /api/v1/namespaces/watch-4652/configmaps/e2e-watch-test-configmap-b 9f313cf8-870f-4b9d-88d7-51f22f6f117a 4049056 0 2020-08-26 23:45:55 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 26 23:45:55.169: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4652 /api/v1/namespaces/watch-4652/configmaps/e2e-watch-test-configmap-b 9f313cf8-870f-4b9d-88d7-51f22f6f117a 4049056 0 2020-08-26 23:45:55 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Aug 26 23:46:05.176: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4652 /api/v1/namespaces/watch-4652/configmaps/e2e-watch-test-configmap-b 9f313cf8-870f-4b9d-88d7-51f22f6f117a 4049086 0 2020-08-26 23:45:55 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 26 23:46:05.176: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4652 /api/v1/namespaces/watch-4652/configmaps/e2e-watch-test-configmap-b 9f313cf8-870f-4b9d-88d7-51f22f6f117a 4049086 0 2020-08-26 23:45:55 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:46:15.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4652" for this suite.

• [SLOW TEST:60.122 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":223,"skipped":3590,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:46:15.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-0057275b-923a-4be2-a43b-48fdf63c3dd6
STEP: Creating a pod to test consume secrets
Aug 26 23:46:15.284: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-301e0387-1a2c-42e2-ac85-71a30dcf94cb" in namespace "projected-2814" to be "success or failure"
Aug 26 23:46:15.317: INFO: Pod "pod-projected-secrets-301e0387-1a2c-42e2-ac85-71a30dcf94cb": Phase="Pending", Reason="", readiness=false. Elapsed: 33.315211ms
Aug 26 23:46:17.371: INFO: Pod "pod-projected-secrets-301e0387-1a2c-42e2-ac85-71a30dcf94cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086918191s
Aug 26 23:46:19.376: INFO: Pod "pod-projected-secrets-301e0387-1a2c-42e2-ac85-71a30dcf94cb": Phase="Running", Reason="", readiness=true. Elapsed: 4.091455465s
Aug 26 23:46:21.380: INFO: Pod "pod-projected-secrets-301e0387-1a2c-42e2-ac85-71a30dcf94cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.095781926s
STEP: Saw pod success
Aug 26 23:46:21.380: INFO: Pod "pod-projected-secrets-301e0387-1a2c-42e2-ac85-71a30dcf94cb" satisfied condition "success or failure"
Aug 26 23:46:21.383: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-301e0387-1a2c-42e2-ac85-71a30dcf94cb container projected-secret-volume-test: 
STEP: delete the pod
Aug 26 23:46:21.432: INFO: Waiting for pod pod-projected-secrets-301e0387-1a2c-42e2-ac85-71a30dcf94cb to disappear
Aug 26 23:46:21.436: INFO: Pod pod-projected-secrets-301e0387-1a2c-42e2-ac85-71a30dcf94cb no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:46:21.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2814" for this suite.

• [SLOW TEST:6.279 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3625,"failed":0}
SSSSS
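
Note: the pod used above mounts a Secret through the "projected" volume type and exits once its container has read the key, which is why the phase goes Pending to Running to Succeeded. A sketch of an equivalent object under the same pre-1.18 client-go assumptions; the secret name, key, image, and paths are illustrative, not taken from the log:

package example

import (
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createProjectedSecretPod creates a short-lived pod that reads one key of a
// pre-existing Secret from a projected volume and then exits.
func createProjectedSecretPod(clientset *kubernetes.Clientset) (*v1.Pod, error) {
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            Volumes: []v1.Volume{{
                Name: "projected-secret-volume",
                VolumeSource: v1.VolumeSource{
                    Projected: &v1.ProjectedVolumeSource{
                        Sources: []v1.VolumeProjection{{
                            Secret: &v1.SecretProjection{
                                LocalObjectReference: v1.LocalObjectReference{Name: "projected-secret-test"},
                            },
                        }},
                    },
                },
            }},
            Containers: []v1.Container{{
                Name:    "projected-secret-volume-test",
                Image:   "busybox",
                Command: []string{"cat", "/etc/projected-secret-volume/data-1"},
                VolumeMounts: []v1.VolumeMount{{
                    Name:      "projected-secret-volume",
                    MountPath: "/etc/projected-secret-volume",
                }},
            }},
        },
    }
    return clientset.CoreV1().Pods("projected-2814").Create(pod)
}
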
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:46:21.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 23:46:21.551: INFO: Waiting up to 5m0s for pod "downwardapi-volume-96054906-4106-4a93-89f4-9144b243c607" in namespace "projected-1449" to be "success or failure"
Aug 26 23:46:21.595: INFO: Pod "downwardapi-volume-96054906-4106-4a93-89f4-9144b243c607": Phase="Pending", Reason="", readiness=false. Elapsed: 43.514929ms
Aug 26 23:46:23.599: INFO: Pod "downwardapi-volume-96054906-4106-4a93-89f4-9144b243c607": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047731203s
Aug 26 23:46:25.604: INFO: Pod "downwardapi-volume-96054906-4106-4a93-89f4-9144b243c607": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052180807s
STEP: Saw pod success
Aug 26 23:46:25.604: INFO: Pod "downwardapi-volume-96054906-4106-4a93-89f4-9144b243c607" satisfied condition "success or failure"
Aug 26 23:46:25.606: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-96054906-4106-4a93-89f4-9144b243c607 container client-container: 
STEP: delete the pod
Aug 26 23:46:25.624: INFO: Waiting for pod downwardapi-volume-96054906-4106-4a93-89f4-9144b243c607 to disappear
Aug 26 23:46:25.628: INFO: Pod downwardapi-volume-96054906-4106-4a93-89f4-9144b243c607 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:46:25.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1449" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3630,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
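
Note: this test and the cpu-request variant just below differ only in which resource field is projected. The downward API exposes the container's own requests as a file; a sketch of the wiring, with illustrative names, image, and quantities:

package example

import (
    v1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
)

// downwardAPIMemorySpec writes the container's memory request into
// /etc/podinfo/memory_request via a projected downward API volume; swapping
// the Resource string to "requests.cpu" gives the cpu variant.
func downwardAPIMemorySpec() v1.PodSpec {
    return v1.PodSpec{
        RestartPolicy: v1.RestartPolicyNever,
        Volumes: []v1.Volume{{
            Name: "podinfo",
            VolumeSource: v1.VolumeSource{
                Projected: &v1.ProjectedVolumeSource{
                    Sources: []v1.VolumeProjection{{
                        DownwardAPI: &v1.DownwardAPIProjection{
                            Items: []v1.DownwardAPIVolumeFile{{
                                Path: "memory_request",
                                ResourceFieldRef: &v1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "requests.memory",
                                },
                            }},
                        },
                    }},
                },
            },
        }},
        Containers: []v1.Container{{
            Name:    "client-container",
            Image:   "busybox",
            Command: []string{"cat", "/etc/podinfo/memory_request"},
            Resources: v1.ResourceRequirements{
                Requests: v1.ResourceList{v1.ResourceMemory: resource.MustParse("32Mi")},
            },
            VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
        }},
    }
}
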
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:46:25.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 23:46:25.744: INFO: Waiting up to 5m0s for pod "downwardapi-volume-344b3d6f-e26d-47be-9dce-ead0a0b5caea" in namespace "projected-6183" to be "success or failure"
Aug 26 23:46:25.765: INFO: Pod "downwardapi-volume-344b3d6f-e26d-47be-9dce-ead0a0b5caea": Phase="Pending", Reason="", readiness=false. Elapsed: 21.09821ms
Aug 26 23:46:27.769: INFO: Pod "downwardapi-volume-344b3d6f-e26d-47be-9dce-ead0a0b5caea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02552764s
Aug 26 23:46:29.773: INFO: Pod "downwardapi-volume-344b3d6f-e26d-47be-9dce-ead0a0b5caea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029536214s
STEP: Saw pod success
Aug 26 23:46:29.773: INFO: Pod "downwardapi-volume-344b3d6f-e26d-47be-9dce-ead0a0b5caea" satisfied condition "success or failure"
Aug 26 23:46:29.776: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-344b3d6f-e26d-47be-9dce-ead0a0b5caea container client-container: 
STEP: delete the pod
Aug 26 23:46:29.897: INFO: Waiting for pod downwardapi-volume-344b3d6f-e26d-47be-9dce-ead0a0b5caea to disappear
Aug 26 23:46:30.103: INFO: Pod downwardapi-volume-344b3d6f-e26d-47be-9dce-ead0a0b5caea no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:46:30.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6183" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3658,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:46:30.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:46:41.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4566" for this suite.

• [SLOW TEST:11.256 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":227,"skipped":3672,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
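
Note: the quota lifecycle above (quota created, status captures a ReplicaSet, usage released on delete) rests on an object-count quota. A sketch of the quota object itself under the same client-go assumptions; the name and limit are illustrative:

package example

import (
    v1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createReplicaSetQuota creates an object-count quota; once it is active, the
// quota's status.used for count/replicasets.apps rises when a ReplicaSet is
// created in the namespace and falls back when it is deleted.
func createReplicaSetQuota(clientset *kubernetes.Clientset) (*v1.ResourceQuota, error) {
    quota := &v1.ResourceQuota{
        ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
        Spec: v1.ResourceQuotaSpec{
            Hard: v1.ResourceList{
                "count/replicasets.apps": resource.MustParse("5"),
            },
        },
    }
    return clientset.CoreV1().ResourceQuotas("resourcequota-4566").Create(quota)
}
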
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:46:41.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-7f49c0e7-5c5b-42eb-9ef6-3d595deaad3e
STEP: Creating a pod to test consume configMaps
Aug 26 23:46:41.464: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c7c07636-8b0b-4262-a06c-0297575fc9a8" in namespace "projected-2805" to be "success or failure"
Aug 26 23:46:41.468: INFO: Pod "pod-projected-configmaps-c7c07636-8b0b-4262-a06c-0297575fc9a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180943ms
Aug 26 23:46:43.486: INFO: Pod "pod-projected-configmaps-c7c07636-8b0b-4262-a06c-0297575fc9a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022779937s
Aug 26 23:46:45.491: INFO: Pod "pod-projected-configmaps-c7c07636-8b0b-4262-a06c-0297575fc9a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026954394s
STEP: Saw pod success
Aug 26 23:46:45.491: INFO: Pod "pod-projected-configmaps-c7c07636-8b0b-4262-a06c-0297575fc9a8" satisfied condition "success or failure"
Aug 26 23:46:45.494: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-c7c07636-8b0b-4262-a06c-0297575fc9a8 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 26 23:46:45.545: INFO: Waiting for pod pod-projected-configmaps-c7c07636-8b0b-4262-a06c-0297575fc9a8 to disappear
Aug 26 23:46:45.551: INFO: Pod pod-projected-configmaps-c7c07636-8b0b-4262-a06c-0297575fc9a8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:46:45.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2805" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3707,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:46:45.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-97a9fa20-8ed4-49a6-bff0-0217a1e09edc
STEP: Creating secret with name secret-projected-all-test-volume-af2a6f23-6fec-478c-910c-eac561e10569
STEP: Creating a pod to test Check all projections for projected volume plugin
Aug 26 23:46:45.697: INFO: Waiting up to 5m0s for pod "projected-volume-62cd5c19-901d-4140-9a05-b346cc39ff89" in namespace "projected-600" to be "success or failure"
Aug 26 23:46:45.750: INFO: Pod "projected-volume-62cd5c19-901d-4140-9a05-b346cc39ff89": Phase="Pending", Reason="", readiness=false. Elapsed: 53.265008ms
Aug 26 23:46:47.900: INFO: Pod "projected-volume-62cd5c19-901d-4140-9a05-b346cc39ff89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203668784s
Aug 26 23:46:50.008: INFO: Pod "projected-volume-62cd5c19-901d-4140-9a05-b346cc39ff89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.311195835s
STEP: Saw pod success
Aug 26 23:46:50.008: INFO: Pod "projected-volume-62cd5c19-901d-4140-9a05-b346cc39ff89" satisfied condition "success or failure"
Aug 26 23:46:50.012: INFO: Trying to get logs from node jerma-worker pod projected-volume-62cd5c19-901d-4140-9a05-b346cc39ff89 container projected-all-volume-test: 
STEP: delete the pod
Aug 26 23:46:50.259: INFO: Waiting for pod projected-volume-62cd5c19-901d-4140-9a05-b346cc39ff89 to disappear
Aug 26 23:46:50.262: INFO: Pod projected-volume-62cd5c19-901d-4140-9a05-b346cc39ff89 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:46:50.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-600" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3726,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
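
Note: the "combined" test works because a single projected volume accepts several sources at once. A sketch of just the volume, assuming the same API types; the object names echo the log but are otherwise illustrative:

package example

import v1 "k8s.io/api/core/v1"

// combinedProjection mixes a ConfigMap, a Secret, and a downward API field in
// one projected volume, so all three land under a single mount path.
func combinedProjection() v1.Volume {
    return v1.Volume{
        Name: "projected-all-volume",
        VolumeSource: v1.VolumeSource{
            Projected: &v1.ProjectedVolumeSource{
                Sources: []v1.VolumeProjection{
                    {ConfigMap: &v1.ConfigMapProjection{
                        LocalObjectReference: v1.LocalObjectReference{Name: "configmap-projected-all-test-volume"},
                    }},
                    {Secret: &v1.SecretProjection{
                        LocalObjectReference: v1.LocalObjectReference{Name: "secret-projected-all-test-volume"},
                    }},
                    {DownwardAPI: &v1.DownwardAPIProjection{
                        Items: []v1.DownwardAPIVolumeFile{{
                            Path:     "podname",
                            FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"},
                        }},
                    }},
                },
            },
        },
    }
}
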
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:46:50.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1587
[It] should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 26 23:46:50.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-6046'
Aug 26 23:46:50.597: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 26 23:46:50.597: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
Aug 26 23:46:50.606: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Aug 26 23:46:50.617: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Aug 26 23:46:50.680: INFO: scanned /root for discovery docs: 
Aug 26 23:46:50.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-6046'
Aug 26 23:47:06.560: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 26 23:47:06.560: INFO: stdout: "Created e2e-test-httpd-rc-0f3d692b6e99eef7b62770f86d4373d8\nScaling up e2e-test-httpd-rc-0f3d692b6e99eef7b62770f86d4373d8 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-0f3d692b6e99eef7b62770f86d4373d8 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-0f3d692b6e99eef7b62770f86d4373d8 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
Aug 26 23:47:06.560: INFO: stdout: "Created e2e-test-httpd-rc-0f3d692b6e99eef7b62770f86d4373d8\nScaling up e2e-test-httpd-rc-0f3d692b6e99eef7b62770f86d4373d8 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-0f3d692b6e99eef7b62770f86d4373d8 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-0f3d692b6e99eef7b62770f86d4373d8 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Aug 26 23:47:06.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-6046'
Aug 26 23:47:06.654: INFO: stderr: ""
Aug 26 23:47:06.654: INFO: stdout: "e2e-test-httpd-rc-0f3d692b6e99eef7b62770f86d4373d8-wnf94 "
Aug 26 23:47:06.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-0f3d692b6e99eef7b62770f86d4373d8-wnf94 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6046'
Aug 26 23:47:06.751: INFO: stderr: ""
Aug 26 23:47:06.751: INFO: stdout: "true"
Aug 26 23:47:06.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-0f3d692b6e99eef7b62770f86d4373d8-wnf94 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6046'
Aug 26 23:47:06.861: INFO: stderr: ""
Aug 26 23:47:06.861: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Aug 26 23:47:06.861: INFO: e2e-test-httpd-rc-0f3d692b6e99eef7b62770f86d4373d8-wnf94 is verified up and running
[AfterEach] Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1593
Aug 26 23:47:06.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-6046'
Aug 26 23:47:06.966: INFO: stderr: ""
Aug 26 23:47:06.966: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:47:06.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6046" for this suite.

• [SLOW TEST:16.782 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
    should support rolling-update to same image [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Deprecated] [Conformance]","total":278,"completed":230,"skipped":3759,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:47:07.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:50
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Aug 26 23:47:11.552: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Aug 26 23:47:21.665: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:47:21.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4535" for this suite.

• [SLOW TEST:14.626 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  [k8s.io] Delete Grace Period
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should be submitted and removed [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":231,"skipped":3772,"failed":0}
SSSSSSSSSS
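
Note: "deleting the pod gracefully" above is a delete with a grace period: the API server stamps deletionTimestamp at once, the kubelet gets the grace period to stop containers, and the object only disappears afterwards, which is what the "waiting to disappear" polling observes. A sketch with the pre-1.18 Delete signature; the 30-second value is illustrative:

package example

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// deleteGracefully removes a pod with an explicit grace period instead of an
// immediate (grace period 0) delete.
func deleteGracefully(clientset *kubernetes.Clientset, namespace, name string) error {
    grace := int64(30)
    return clientset.CoreV1().Pods(namespace).Delete(name, &metav1.DeleteOptions{
        GracePeriodSeconds: &grace,
    })
}
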
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:47:21.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:47:25.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3191" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3782,"failed":0}
SSSSSSSS
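
Note: the hostAliases test rests on a single pod-spec field; the kubelet renders each entry into the container's /etc/hosts. A sketch of the relevant spec; the IP, hostnames, and image are illustrative:

package example

import v1 "k8s.io/api/core/v1"

// hostAliasesSpec is a pod spec whose hostAliases entries the kubelet writes
// into /etc/hosts inside the container.
func hostAliasesSpec() v1.PodSpec {
    return v1.PodSpec{
        RestartPolicy: v1.RestartPolicyNever,
        HostAliases: []v1.HostAlias{{
            IP:        "123.45.67.89",
            Hostnames: []string{"foo.local", "bar.local"},
        }},
        Containers: []v1.Container{{
            Name:    "busybox-host-aliases",
            Image:   "busybox",
            Command: []string{"cat", "/etc/hosts"},
        }},
    }
}
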
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:47:25.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Aug 26 23:47:25.932: INFO: >>> kubeConfig: /root/.kube/config
Aug 26 23:47:28.833: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:47:39.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4299" for this suite.

• [SLOW TEST:13.404 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":233,"skipped":3790,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:47:39.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:47:39.253: INFO: Creating deployment "test-recreate-deployment"
Aug 26 23:47:39.271: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Aug 26 23:47:39.319: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Aug 26 23:47:41.325: INFO: Waiting for deployment "test-recreate-deployment" to complete
Aug 26 23:47:41.326: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082459, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082459, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082459, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082459, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 23:47:43.330: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug 26 23:47:43.336: INFO: Updating deployment test-recreate-deployment
Aug 26 23:47:43.336: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 26 23:47:43.862: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-2055 /apis/apps/v1/namespaces/deployment-2055/deployments/test-recreate-deployment 1861e9b8-ecbe-4d08-bfd8-514cb4e13191 4049718 2 2020-08-26 23:47:39 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004b907f8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-26 23:47:43 +0000 UTC,LastTransitionTime:2020-08-26 23:47:43 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-08-26 23:47:43 +0000 UTC,LastTransitionTime:2020-08-26 23:47:39 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Aug 26 23:47:44.021: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-2055 /apis/apps/v1/namespaces/deployment-2055/replicasets/test-recreate-deployment-5f94c574ff c935eee2-ae37-4a79-9ce3-f351b88f4c18 4049715 1 2020-08-26 23:47:43 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 1861e9b8-ecbe-4d08-bfd8-514cb4e13191 0xc004b90d97 0xc004b90d98}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004b90e18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 26 23:47:44.021: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Aug 26 23:47:44.021: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-2055 /apis/apps/v1/namespaces/deployment-2055/replicasets/test-recreate-deployment-799c574856 672fae31-ea82-4f49-b856-344534e3aac9 4049707 2 2020-08-26 23:47:39 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 1861e9b8-ecbe-4d08-bfd8-514cb4e13191 0xc004b90ec7 0xc004b90ec8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004b90f38  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 26 23:47:44.025: INFO: Pod "test-recreate-deployment-5f94c574ff-tplrb" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-tplrb test-recreate-deployment-5f94c574ff- deployment-2055 /api/v1/namespaces/deployment-2055/pods/test-recreate-deployment-5f94c574ff-tplrb f7514ccc-f003-47fb-92e8-25f89442cf45 4049719 0 2020-08-26 23:47:43 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff c935eee2-ae37-4a79-9ce3-f351b88f4c18 0xc004b91407 0xc004b91408}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cqmw6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cqmw6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cqmw6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:47:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:47:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:47:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 23:47:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-26 23:47:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:47:44.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2055" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":234,"skipped":3847,"failed":0}
SSSS
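
Note: the behavior verified above (old pods fully gone before new ones appear, hence the transient MinimumReplicasUnavailable condition in the dump) comes from one field of the Deployment spec. A minimal sketch of that field:

package example

import appsv1 "k8s.io/api/apps/v1"

// recreateStrategy selects the Recreate strategy: the controller scales the
// old ReplicaSet to zero before bringing up the new one, so old and new pods
// never overlap.
func recreateStrategy() appsv1.DeploymentStrategy {
    return appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType}
}
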
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:47:44.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:47:44.443: INFO: Create a RollingUpdate DaemonSet
Aug 26 23:47:44.453: INFO: Check that daemon pods launch on every node of the cluster
Aug 26 23:47:44.476: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:47:44.566: INFO: Number of nodes with available pods: 0
Aug 26 23:47:44.566: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:47:45.571: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:47:45.575: INFO: Number of nodes with available pods: 0
Aug 26 23:47:45.575: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:47:46.571: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:47:46.574: INFO: Number of nodes with available pods: 0
Aug 26 23:47:46.574: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:47:47.571: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:47:47.573: INFO: Number of nodes with available pods: 0
Aug 26 23:47:47.574: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:47:48.570: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:47:48.573: INFO: Number of nodes with available pods: 0
Aug 26 23:47:48.573: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:47:49.604: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:47:49.931: INFO: Number of nodes with available pods: 2
Aug 26 23:47:49.931: INFO: Number of running nodes: 2, number of available pods: 2
Aug 26 23:47:49.931: INFO: Update the DaemonSet to trigger a rollout
Aug 26 23:47:49.959: INFO: Updating DaemonSet daemon-set
Aug 26 23:48:02.238: INFO: Roll back the DaemonSet before rollout is complete
Aug 26 23:48:02.271: INFO: Updating DaemonSet daemon-set
Aug 26 23:48:02.271: INFO: Make sure DaemonSet rollback is complete
Aug 26 23:48:02.277: INFO: Wrong image for pod: daemon-set-qbt8m. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 26 23:48:02.277: INFO: Pod daemon-set-qbt8m is not available
Aug 26 23:48:02.283: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:48:03.286: INFO: Wrong image for pod: daemon-set-qbt8m. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 26 23:48:03.286: INFO: Pod daemon-set-qbt8m is not available
Aug 26 23:48:03.290: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:48:04.434: INFO: Pod daemon-set-q85kf is not available
Aug 26 23:48:04.442: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5088, will wait for the garbage collector to delete the pods
Aug 26 23:48:04.528: INFO: Deleting DaemonSet.extensions daemon-set took: 6.690334ms
Aug 26 23:48:04.628: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.237294ms
Aug 26 23:48:11.912: INFO: Number of nodes with available pods: 0
Aug 26 23:48:11.912: INFO: Number of running nodes: 0, number of available pods: 0
Aug 26 23:48:11.914: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5088/daemonsets","resourceVersion":"4049911"},"items":null}

Aug 26 23:48:11.917: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5088/pods","resourceVersion":"4049911"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:48:11.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5088" for this suite.

• [SLOW TEST:27.895 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":235,"skipped":3851,"failed":0}
SSSSSSSSSSSSSSSSS
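
Note: the update/rollback pair above is two writes to the same DaemonSet template. A sketch under the same pre-1.18 client-go assumptions; the image values follow the log, where foo:non-existent is the intentionally broken rollout target:

package example

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// rollOutAndBack updates the DaemonSet's pod template image to a broken tag
// (starting a rollout) and then restores the original image, mirroring the
// rollback the test performs before the new image ever runs everywhere.
func rollOutAndBack(clientset *kubernetes.Clientset, namespace string) error {
    ds, err := clientset.AppsV1().DaemonSets(namespace).Get("daemon-set", metav1.GetOptions{})
    if err != nil {
        return err
    }
    original := ds.Spec.Template.Spec.Containers[0].Image
    ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
    if ds, err = clientset.AppsV1().DaemonSets(namespace).Update(ds); err != nil {
        return err
    }
    ds.Spec.Template.Spec.Containers[0].Image = original
    _, err = clientset.AppsV1().DaemonSets(namespace).Update(ds)
    return err
}
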
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:48:11.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-8443
[It] should have a working scale subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-8443
Aug 26 23:48:12.925: INFO: Found 0 stateful pods, waiting for 1
Aug 26 23:48:22.930: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 26 23:48:22.954: INFO: Deleting all statefulset in ns statefulset-8443
Aug 26 23:48:22.961: INFO: Scaling statefulset ss to 0
Aug 26 23:48:33.012: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 23:48:33.015: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:48:33.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8443" for this suite.

• [SLOW TEST:21.107 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":236,"skipped":3868,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
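
Note: "getting scale subresource" and "updating a scale subresource" above map onto two client-go calls against the /scale endpoint; spec.replicas changes without sending the full StatefulSet. A sketch with the same pre-1.18 signature assumptions:

package example

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// scaleStatefulSet reads the scale subresource, bumps spec.replicas, and
// writes it back through the same subresource.
func scaleStatefulSet(clientset *kubernetes.Clientset, namespace, name string, replicas int32) error {
    scale, err := clientset.AppsV1().StatefulSets(namespace).GetScale(name, metav1.GetOptions{})
    if err != nil {
        return err
    }
    scale.Spec.Replicas = replicas
    _, err = clientset.AppsV1().StatefulSets(namespace).UpdateScale(name, scale)
    return err
}
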
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:48:33.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-8072
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8072 to expose endpoints map[]
Aug 26 23:48:33.220: INFO: successfully validated that service multi-endpoint-test in namespace services-8072 exposes endpoints map[] (74.497373ms elapsed)
STEP: Creating pod pod1 in namespace services-8072
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8072 to expose endpoints map[pod1:[100]]
Aug 26 23:48:37.273: INFO: successfully validated that service multi-endpoint-test in namespace services-8072 exposes endpoints map[pod1:[100]] (4.045261696s elapsed)
STEP: Creating pod pod2 in namespace services-8072
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8072 to expose endpoints map[pod1:[100] pod2:[101]]
Aug 26 23:48:40.462: INFO: successfully validated that service multi-endpoint-test in namespace services-8072 exposes endpoints map[pod1:[100] pod2:[101]] (3.186404019s elapsed)
STEP: Deleting pod pod1 in namespace services-8072
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8072 to expose endpoints map[pod2:[101]]
Aug 26 23:48:41.521: INFO: successfully validated that service multi-endpoint-test in namespace services-8072 exposes endpoints map[pod2:[101]] (1.055987816s elapsed)
STEP: Deleting pod pod2 in namespace services-8072
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8072 to expose endpoints map[]
Aug 26 23:48:42.535: INFO: successfully validated that service multi-endpoint-test in namespace services-8072 exposes endpoints map[] (1.008427948s elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:48:42.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8072" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:9.650 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":237,"skipped":3900,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:48:42.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:49:08.783: INFO: Container started at 2020-08-26 23:48:45 +0000 UTC, pod became ready at 2020-08-26 23:49:08 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:49:08.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4757" for this suite.

• [SLOW TEST:26.102 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3907,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:49:08.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 26 23:49:09.348: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug 26 23:49:11.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082549, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082549, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082549, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082549, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 23:49:13.632: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082549, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082549, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082549, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082549, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 23:49:16.663: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:49:16.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:49:17.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-7089" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:9.177 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":239,"skipped":3947,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:49:17.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Aug 26 23:49:18.078: INFO: Waiting up to 5m0s for pod "client-containers-5eec1bd7-9a71-4470-b726-680b502016dc" in namespace "containers-5518" to be "success or failure"
Aug 26 23:49:18.081: INFO: Pod "client-containers-5eec1bd7-9a71-4470-b726-680b502016dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.649481ms
Aug 26 23:49:20.113: INFO: Pod "client-containers-5eec1bd7-9a71-4470-b726-680b502016dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034800784s
Aug 26 23:49:22.117: INFO: Pod "client-containers-5eec1bd7-9a71-4470-b726-680b502016dc": Phase="Running", Reason="", readiness=true. Elapsed: 4.038300941s
Aug 26 23:49:24.142: INFO: Pod "client-containers-5eec1bd7-9a71-4470-b726-680b502016dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063658025s
STEP: Saw pod success
Aug 26 23:49:24.142: INFO: Pod "client-containers-5eec1bd7-9a71-4470-b726-680b502016dc" satisfied condition "success or failure"
Aug 26 23:49:24.145: INFO: Trying to get logs from node jerma-worker2 pod client-containers-5eec1bd7-9a71-4470-b726-680b502016dc container test-container: 
STEP: delete the pod
Aug 26 23:49:24.476: INFO: Waiting for pod client-containers-5eec1bd7-9a71-4470-b726-680b502016dc to disappear
Aug 26 23:49:24.485: INFO: Pod client-containers-5eec1bd7-9a71-4470-b726-680b502016dc no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:49:24.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5518" for this suite.

• [SLOW TEST:6.524 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3973,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:49:24.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:49:31.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1386" for this suite.

• [SLOW TEST:7.210 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":241,"skipped":3974,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:49:31.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Aug 26 23:49:38.360: INFO: Successfully updated pod "adopt-release-jwmjb"
STEP: Checking that the Job readopts the Pod
Aug 26 23:49:38.360: INFO: Waiting up to 15m0s for pod "adopt-release-jwmjb" in namespace "job-900" to be "adopted"
Aug 26 23:49:38.416: INFO: Pod "adopt-release-jwmjb": Phase="Running", Reason="", readiness=true. Elapsed: 56.344555ms
Aug 26 23:49:40.420: INFO: Pod "adopt-release-jwmjb": Phase="Running", Reason="", readiness=true. Elapsed: 2.060054182s
Aug 26 23:49:40.420: INFO: Pod "adopt-release-jwmjb" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Aug 26 23:49:40.928: INFO: Successfully updated pod "adopt-release-jwmjb"
STEP: Checking that the Job releases the Pod
Aug 26 23:49:40.928: INFO: Waiting up to 15m0s for pod "adopt-release-jwmjb" in namespace "job-900" to be "released"
Aug 26 23:49:40.948: INFO: Pod "adopt-release-jwmjb": Phase="Running", Reason="", readiness=true. Elapsed: 19.784044ms
Aug 26 23:49:43.033: INFO: Pod "adopt-release-jwmjb": Phase="Running", Reason="", readiness=true. Elapsed: 2.104671271s
Aug 26 23:49:43.033: INFO: Pod "adopt-release-jwmjb" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:49:43.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-900" for this suite.

• [SLOW TEST:11.355 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":242,"skipped":4033,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:49:43.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 23:49:43.113: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6982c718-ec4d-4bd6-8df2-b48005736965" in namespace "downward-api-5993" to be "success or failure"
Aug 26 23:49:43.231: INFO: Pod "downwardapi-volume-6982c718-ec4d-4bd6-8df2-b48005736965": Phase="Pending", Reason="", readiness=false. Elapsed: 117.778373ms
Aug 26 23:49:45.297: INFO: Pod "downwardapi-volume-6982c718-ec4d-4bd6-8df2-b48005736965": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183230182s
Aug 26 23:49:47.300: INFO: Pod "downwardapi-volume-6982c718-ec4d-4bd6-8df2-b48005736965": Phase="Pending", Reason="", readiness=false. Elapsed: 4.186722746s
Aug 26 23:49:49.305: INFO: Pod "downwardapi-volume-6982c718-ec4d-4bd6-8df2-b48005736965": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.191244975s
STEP: Saw pod success
Aug 26 23:49:49.305: INFO: Pod "downwardapi-volume-6982c718-ec4d-4bd6-8df2-b48005736965" satisfied condition "success or failure"
Aug 26 23:49:49.307: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-6982c718-ec4d-4bd6-8df2-b48005736965 container client-container: 
STEP: delete the pod
Aug 26 23:49:49.353: INFO: Waiting for pod downwardapi-volume-6982c718-ec4d-4bd6-8df2-b48005736965 to disappear
Aug 26 23:49:49.363: INFO: Pod downwardapi-volume-6982c718-ec4d-4bd6-8df2-b48005736965 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:49:49.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5993" for this suite.

• [SLOW TEST:6.339 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":4039,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:49:49.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 23:49:49.487: INFO: Waiting up to 5m0s for pod "downwardapi-volume-30cfa5f4-b852-4e2e-9216-b96606881306" in namespace "downward-api-9394" to be "success or failure"
Aug 26 23:49:49.598: INFO: Pod "downwardapi-volume-30cfa5f4-b852-4e2e-9216-b96606881306": Phase="Pending", Reason="", readiness=false. Elapsed: 111.755192ms
Aug 26 23:49:51.998: INFO: Pod "downwardapi-volume-30cfa5f4-b852-4e2e-9216-b96606881306": Phase="Pending", Reason="", readiness=false. Elapsed: 2.511049097s
Aug 26 23:49:54.001: INFO: Pod "downwardapi-volume-30cfa5f4-b852-4e2e-9216-b96606881306": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.514594553s
STEP: Saw pod success
Aug 26 23:49:54.001: INFO: Pod "downwardapi-volume-30cfa5f4-b852-4e2e-9216-b96606881306" satisfied condition "success or failure"
Aug 26 23:49:54.003: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-30cfa5f4-b852-4e2e-9216-b96606881306 container client-container: 
STEP: delete the pod
Aug 26 23:49:54.045: INFO: Waiting for pod downwardapi-volume-30cfa5f4-b852-4e2e-9216-b96606881306 to disappear
Aug 26 23:49:54.081: INFO: Pod downwardapi-volume-30cfa5f4-b852-4e2e-9216-b96606881306 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:49:54.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9394" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":4067,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:49:54.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-d975396c-4721-49ca-bcab-6c3f6190581f
STEP: Creating a pod to test consume configMaps
Aug 26 23:49:54.212: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cc09e685-1edc-4454-a57e-18ab5534c7d2" in namespace "projected-2542" to be "success or failure"
Aug 26 23:49:54.217: INFO: Pod "pod-projected-configmaps-cc09e685-1edc-4454-a57e-18ab5534c7d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.830241ms
Aug 26 23:49:56.350: INFO: Pod "pod-projected-configmaps-cc09e685-1edc-4454-a57e-18ab5534c7d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138199807s
Aug 26 23:49:58.434: INFO: Pod "pod-projected-configmaps-cc09e685-1edc-4454-a57e-18ab5534c7d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222499123s
Aug 26 23:50:00.439: INFO: Pod "pod-projected-configmaps-cc09e685-1edc-4454-a57e-18ab5534c7d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.227394544s
STEP: Saw pod success
Aug 26 23:50:00.439: INFO: Pod "pod-projected-configmaps-cc09e685-1edc-4454-a57e-18ab5534c7d2" satisfied condition "success or failure"
Aug 26 23:50:00.442: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-cc09e685-1edc-4454-a57e-18ab5534c7d2 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 26 23:50:00.470: INFO: Waiting for pod pod-projected-configmaps-cc09e685-1edc-4454-a57e-18ab5534c7d2 to disappear
Aug 26 23:50:00.474: INFO: Pod pod-projected-configmaps-cc09e685-1edc-4454-a57e-18ab5534c7d2 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:50:00.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2542" for this suite.

• [SLOW TEST:6.395 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":4074,"failed":0}
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:50:00.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Aug 26 23:50:00.596: INFO: Waiting up to 5m0s for pod "client-containers-f684ff19-460e-4d05-8d8a-8de66e5b1f76" in namespace "containers-8677" to be "success or failure"
Aug 26 23:50:00.628: INFO: Pod "client-containers-f684ff19-460e-4d05-8d8a-8de66e5b1f76": Phase="Pending", Reason="", readiness=false. Elapsed: 31.558431ms
Aug 26 23:50:02.632: INFO: Pod "client-containers-f684ff19-460e-4d05-8d8a-8de66e5b1f76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035640121s
Aug 26 23:50:04.636: INFO: Pod "client-containers-f684ff19-460e-4d05-8d8a-8de66e5b1f76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03993186s
STEP: Saw pod success
Aug 26 23:50:04.636: INFO: Pod "client-containers-f684ff19-460e-4d05-8d8a-8de66e5b1f76" satisfied condition "success or failure"
Aug 26 23:50:04.640: INFO: Trying to get logs from node jerma-worker2 pod client-containers-f684ff19-460e-4d05-8d8a-8de66e5b1f76 container test-container: 
STEP: delete the pod
Aug 26 23:50:04.808: INFO: Waiting for pod client-containers-f684ff19-460e-4d05-8d8a-8de66e5b1f76 to disappear
Aug 26 23:50:04.903: INFO: Pod client-containers-f684ff19-460e-4d05-8d8a-8de66e5b1f76 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:50:04.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8677" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4078,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:50:04.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 23:50:05.584: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 23:50:07.594: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082605, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082605, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082605, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082605, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 23:50:10.632: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Aug 26 23:50:10.655: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:50:10.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7377" for this suite.
STEP: Destroying namespace "webhook-7377-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:5.811 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":247,"skipped":4081,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:50:10.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:50:24.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2849" for this suite.

• [SLOW TEST:13.678 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":248,"skipped":4088,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:50:24.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 26 23:50:24.753: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:50:24.819: INFO: Number of nodes with available pods: 0
Aug 26 23:50:24.819: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:50:25.825: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:50:25.829: INFO: Number of nodes with available pods: 0
Aug 26 23:50:25.829: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:50:26.823: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:50:26.826: INFO: Number of nodes with available pods: 0
Aug 26 23:50:26.826: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:50:27.837: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:50:27.840: INFO: Number of nodes with available pods: 0
Aug 26 23:50:27.840: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:50:29.035: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:50:29.249: INFO: Number of nodes with available pods: 0
Aug 26 23:50:29.250: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:50:29.987: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:50:30.033: INFO: Number of nodes with available pods: 1
Aug 26 23:50:30.033: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 23:50:30.824: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:50:30.827: INFO: Number of nodes with available pods: 2
Aug 26 23:50:30.827: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Aug 26 23:50:31.009: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:50:31.027: INFO: Number of nodes with available pods: 2
Aug 26 23:50:31.027: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6397, will wait for the garbage collector to delete the pods
Aug 26 23:50:32.246: INFO: Deleting DaemonSet.extensions daemon-set took: 31.254015ms
Aug 26 23:50:32.646: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.284718ms
Aug 26 23:50:41.751: INFO: Number of nodes with available pods: 0
Aug 26 23:50:41.751: INFO: Number of running nodes: 0, number of available pods: 0
Aug 26 23:50:41.753: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6397/daemonsets","resourceVersion":"4050937"},"items":null}

Aug 26 23:50:41.755: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6397/pods","resourceVersion":"4050937"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:50:41.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6397" for this suite.

• [SLOW TEST:17.315 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":249,"skipped":4115,"failed":0}
SSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:50:41.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:50:46.058: INFO: Waiting up to 5m0s for pod "client-envvars-bfdd712c-2606-43e0-9003-43b2baaa595a" in namespace "pods-403" to be "success or failure"
Aug 26 23:50:46.076: INFO: Pod "client-envvars-bfdd712c-2606-43e0-9003-43b2baaa595a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.624599ms
Aug 26 23:50:48.082: INFO: Pod "client-envvars-bfdd712c-2606-43e0-9003-43b2baaa595a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023866291s
Aug 26 23:50:50.086: INFO: Pod "client-envvars-bfdd712c-2606-43e0-9003-43b2baaa595a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027917765s
STEP: Saw pod success
Aug 26 23:50:50.086: INFO: Pod "client-envvars-bfdd712c-2606-43e0-9003-43b2baaa595a" satisfied condition "success or failure"
Aug 26 23:50:50.089: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-bfdd712c-2606-43e0-9003-43b2baaa595a container env3cont: 
STEP: delete the pod
Aug 26 23:50:50.109: INFO: Waiting for pod client-envvars-bfdd712c-2606-43e0-9003-43b2baaa595a to disappear
Aug 26 23:50:50.133: INFO: Pod client-envvars-bfdd712c-2606-43e0-9003-43b2baaa595a no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:50:50.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-403" for this suite.

• [SLOW TEST:8.541 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4119,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:50:50.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 26 23:50:50.487: INFO: Waiting up to 5m0s for pod "pod-a7c69f4e-5eed-47b0-9ff2-5eb1e2406ca5" in namespace "emptydir-4168" to be "success or failure"
Aug 26 23:50:50.615: INFO: Pod "pod-a7c69f4e-5eed-47b0-9ff2-5eb1e2406ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 128.036369ms
Aug 26 23:50:52.620: INFO: Pod "pod-a7c69f4e-5eed-47b0-9ff2-5eb1e2406ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132925762s
Aug 26 23:50:54.681: INFO: Pod "pod-a7c69f4e-5eed-47b0-9ff2-5eb1e2406ca5": Phase="Running", Reason="", readiness=true. Elapsed: 4.193989507s
Aug 26 23:50:56.684: INFO: Pod "pod-a7c69f4e-5eed-47b0-9ff2-5eb1e2406ca5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.197242081s
STEP: Saw pod success
Aug 26 23:50:56.684: INFO: Pod "pod-a7c69f4e-5eed-47b0-9ff2-5eb1e2406ca5" satisfied condition "success or failure"
Aug 26 23:50:56.686: INFO: Trying to get logs from node jerma-worker pod pod-a7c69f4e-5eed-47b0-9ff2-5eb1e2406ca5 container test-container: 
STEP: delete the pod
Aug 26 23:50:56.783: INFO: Waiting for pod pod-a7c69f4e-5eed-47b0-9ff2-5eb1e2406ca5 to disappear
Aug 26 23:50:56.786: INFO: Pod pod-a7c69f4e-5eed-47b0-9ff2-5eb1e2406ca5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:50:56.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4168" for this suite.

• [SLOW TEST:6.484 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4126,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:50:56.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Aug 26 23:50:56.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Aug 26 23:51:07.212: INFO: >>> kubeConfig: /root/.kube/config
Aug 26 23:51:10.094: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:51:19.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-916" for this suite.

• [SLOW TEST:22.751 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":252,"skipped":4148,"failed":0}
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:51:19.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 26 23:51:19.642: INFO: Waiting up to 5m0s for pod "pod-4e18085c-61ba-47f7-9b05-241bd97a8852" in namespace "emptydir-2004" to be "success or failure"
Aug 26 23:51:19.647: INFO: Pod "pod-4e18085c-61ba-47f7-9b05-241bd97a8852": Phase="Pending", Reason="", readiness=false. Elapsed: 4.586461ms
Aug 26 23:51:21.705: INFO: Pod "pod-4e18085c-61ba-47f7-9b05-241bd97a8852": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062387984s
Aug 26 23:51:23.708: INFO: Pod "pod-4e18085c-61ba-47f7-9b05-241bd97a8852": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066103049s
STEP: Saw pod success
Aug 26 23:51:23.708: INFO: Pod "pod-4e18085c-61ba-47f7-9b05-241bd97a8852" satisfied condition "success or failure"
Aug 26 23:51:23.711: INFO: Trying to get logs from node jerma-worker pod pod-4e18085c-61ba-47f7-9b05-241bd97a8852 container test-container: 
STEP: delete the pod
Aug 26 23:51:23.791: INFO: Waiting for pod pod-4e18085c-61ba-47f7-9b05-241bd97a8852 to disappear
Aug 26 23:51:23.926: INFO: Pod pod-4e18085c-61ba-47f7-9b05-241bd97a8852 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:51:23.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2004" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4148,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:51:23.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 26 23:51:23.989: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 26 23:51:24.019: INFO: Waiting for terminating namespaces to be deleted...
Aug 26 23:51:24.021: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 26 23:51:24.024: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 23:51:24.025: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 26 23:51:24.025: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 23:51:24.025: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 23:51:24.025: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 26 23:51:24.025: INFO: 	Container app ready: true, restart count 0
Aug 26 23:51:24.025: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 26 23:51:24.055: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 23:51:24.055: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 26 23:51:24.055: INFO: test-recreate-deployment-5f94c574ff-k4dkm from deployment-5601 started at 2020-08-23 04:50:56 +0000 UTC (1 container status recorded)
Aug 26 23:51:24.055: INFO: 	Container httpd ready: true, restart count 0
Aug 26 23:51:24.055: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 26 23:51:24.055: INFO: 	Container app ready: true, restart count 0
Aug 26 23:51:24.055: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 26 23:51:24.055: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162ef6714289ade0], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:51:25.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-731" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":254,"skipped":4190,"failed":0}
SSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:51:25.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-77d4f48a-82f3-4b70-b73f-7d2dd971b199 in namespace container-probe-1822
Aug 26 23:51:29.195: INFO: Started pod test-webserver-77d4f48a-82f3-4b70-b73f-7d2dd971b199 in namespace container-probe-1822
STEP: checking the pod's current state and verifying that restartCount is present
Aug 26 23:51:29.200: INFO: Initial restart count of pod test-webserver-77d4f48a-82f3-4b70-b73f-7d2dd971b199 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:55:30.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1822" for this suite.

• [SLOW TEST:245.459 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4193,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:55:30.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 23:55:31.902: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 23:55:33.911: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082931, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082931, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082931, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082931, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 23:55:37.543: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:55:37.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6853" for this suite.
STEP: Destroying namespace "webhook-6853-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.487 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":256,"skipped":4199,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:55:39.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6406.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6406.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6406.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6406.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6406.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6406.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 23:55:48.790: INFO: DNS probes using dns-6406/dns-test-8de40e05-3073-4cbe-8dd1-cf83e8c1f32a succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:55:49.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6406" for this suite.

• [SLOW TEST:10.318 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":257,"skipped":4222,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:55:49.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 23:55:50.172: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 23:55:52.294: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082950, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082950, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082950, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082950, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 23:55:54.298: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082950, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082950, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082950, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734082950, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 23:55:57.353: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one; the webhook should reject it
STEP: update (PATCH) the admitted configmap to a non-compliant one; the webhook should reject it
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:56:07.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8949" for this suite.
STEP: Destroying namespace "webhook-8949-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.514 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":258,"skipped":4234,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:56:07.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 26 23:56:07.961: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27382560-bae0-4df7-b009-6292eadde843" in namespace "projected-8135" to be "success or failure"
Aug 26 23:56:07.970: INFO: Pod "downwardapi-volume-27382560-bae0-4df7-b009-6292eadde843": Phase="Pending", Reason="", readiness=false. Elapsed: 8.950125ms
Aug 26 23:56:09.974: INFO: Pod "downwardapi-volume-27382560-bae0-4df7-b009-6292eadde843": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013179323s
Aug 26 23:56:11.978: INFO: Pod "downwardapi-volume-27382560-bae0-4df7-b009-6292eadde843": Phase="Running", Reason="", readiness=true. Elapsed: 4.017252676s
Aug 26 23:56:13.982: INFO: Pod "downwardapi-volume-27382560-bae0-4df7-b009-6292eadde843": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020793694s
STEP: Saw pod success
Aug 26 23:56:13.982: INFO: Pod "downwardapi-volume-27382560-bae0-4df7-b009-6292eadde843" satisfied condition "success or failure"
Aug 26 23:56:13.984: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-27382560-bae0-4df7-b009-6292eadde843 container client-container: 
STEP: delete the pod
Aug 26 23:56:14.012: INFO: Waiting for pod downwardapi-volume-27382560-bae0-4df7-b009-6292eadde843 to disappear
Aug 26 23:56:14.016: INFO: Pod downwardapi-volume-27382560-bae0-4df7-b009-6292eadde843 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:56:14.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8135" for this suite.

• [SLOW TEST:6.159 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4235,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:56:14.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 26 23:56:22.174: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 23:56:22.210: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 23:56:24.211: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 23:56:24.222: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 23:56:26.211: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 23:56:26.420: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 23:56:28.211: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 23:56:28.214: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 23:56:30.211: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 23:56:30.214: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 23:56:32.211: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 23:56:32.215: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:56:32.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4272" for this suite.

• [SLOW TEST:18.215 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4246,"failed":0}
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:56:32.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 26 23:56:32.360: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Aug 26 23:56:32.372: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:32.405: INFO: Number of nodes with available pods: 0
Aug 26 23:56:32.405: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:56:33.409: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:33.419: INFO: Number of nodes with available pods: 0
Aug 26 23:56:33.419: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:56:34.451: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:34.540: INFO: Number of nodes with available pods: 0
Aug 26 23:56:34.540: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:56:35.454: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:35.457: INFO: Number of nodes with available pods: 0
Aug 26 23:56:35.457: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:56:36.409: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:36.412: INFO: Number of nodes with available pods: 0
Aug 26 23:56:36.413: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:56:37.541: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:37.582: INFO: Number of nodes with available pods: 1
Aug 26 23:56:37.582: INFO: Node jerma-worker is running more than one daemon pod
Aug 26 23:56:38.409: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:38.412: INFO: Number of nodes with available pods: 2
Aug 26 23:56:38.412: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update the daemon pods' image.
STEP: Check that the daemon pods' images are updated.
Aug 26 23:56:38.510: INFO: Wrong image for pod: daemon-set-pjr52. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:38.510: INFO: Wrong image for pod: daemon-set-sjhlp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:38.540: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:39.544: INFO: Wrong image for pod: daemon-set-pjr52. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:39.544: INFO: Wrong image for pod: daemon-set-sjhlp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:39.547: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:40.545: INFO: Wrong image for pod: daemon-set-pjr52. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:40.545: INFO: Wrong image for pod: daemon-set-sjhlp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:40.548: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:41.544: INFO: Wrong image for pod: daemon-set-pjr52. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:41.544: INFO: Wrong image for pod: daemon-set-sjhlp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:41.547: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:42.570: INFO: Wrong image for pod: daemon-set-pjr52. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:42.570: INFO: Wrong image for pod: daemon-set-sjhlp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:42.570: INFO: Pod daemon-set-sjhlp is not available
Aug 26 23:56:42.574: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:43.545: INFO: Wrong image for pod: daemon-set-pjr52. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:43.545: INFO: Wrong image for pod: daemon-set-sjhlp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:43.545: INFO: Pod daemon-set-sjhlp is not available
Aug 26 23:56:43.549: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:44.545: INFO: Wrong image for pod: daemon-set-pjr52. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:44.545: INFO: Wrong image for pod: daemon-set-sjhlp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:44.545: INFO: Pod daemon-set-sjhlp is not available
Aug 26 23:56:44.548: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:45.545: INFO: Wrong image for pod: daemon-set-pjr52. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:45.545: INFO: Wrong image for pod: daemon-set-sjhlp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:45.545: INFO: Pod daemon-set-sjhlp is not available
Aug 26 23:56:45.549: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:46.545: INFO: Wrong image for pod: daemon-set-pjr52. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:46.545: INFO: Wrong image for pod: daemon-set-sjhlp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:46.545: INFO: Pod daemon-set-sjhlp is not available
Aug 26 23:56:46.549: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:47.545: INFO: Wrong image for pod: daemon-set-pjr52. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:47.545: INFO: Wrong image for pod: daemon-set-sjhlp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:47.545: INFO: Pod daemon-set-sjhlp is not available
Aug 26 23:56:47.548: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:48.545: INFO: Wrong image for pod: daemon-set-pjr52. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:48.545: INFO: Wrong image for pod: daemon-set-sjhlp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:48.545: INFO: Pod daemon-set-sjhlp is not available
Aug 26 23:56:48.549: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:49.545: INFO: Wrong image for pod: daemon-set-pjr52. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:49.545: INFO: Wrong image for pod: daemon-set-sjhlp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:49.545: INFO: Pod daemon-set-sjhlp is not available
Aug 26 23:56:49.548: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:50.544: INFO: Wrong image for pod: daemon-set-pjr52. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:50.544: INFO: Wrong image for pod: daemon-set-sjhlp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:50.544: INFO: Pod daemon-set-sjhlp is not available
Aug 26 23:56:50.548: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:51.545: INFO: Wrong image for pod: daemon-set-pjr52. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:51.545: INFO: Wrong image for pod: daemon-set-sjhlp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:51.545: INFO: Pod daemon-set-sjhlp is not available
Aug 26 23:56:51.548: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:52.600: INFO: Wrong image for pod: daemon-set-pjr52. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:52.600: INFO: Pod daemon-set-zrbs8 is not available
Aug 26 23:56:52.603: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:53.545: INFO: Wrong image for pod: daemon-set-pjr52. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:53.545: INFO: Pod daemon-set-zrbs8 is not available
Aug 26 23:56:53.548: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:54.595: INFO: Wrong image for pod: daemon-set-pjr52. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:54.595: INFO: Pod daemon-set-zrbs8 is not available
Aug 26 23:56:54.613: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:55.648: INFO: Wrong image for pod: daemon-set-pjr52. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:55.648: INFO: Pod daemon-set-zrbs8 is not available
Aug 26 23:56:55.660: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:56.600: INFO: Wrong image for pod: daemon-set-pjr52. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:56.603: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:57.545: INFO: Wrong image for pod: daemon-set-pjr52. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:57.549: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:58.545: INFO: Wrong image for pod: daemon-set-pjr52. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:58.545: INFO: Pod daemon-set-pjr52 is not available
Aug 26 23:56:58.549: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:56:59.545: INFO: Wrong image for pod: daemon-set-pjr52. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:56:59.545: INFO: Pod daemon-set-pjr52 is not available
Aug 26 23:56:59.548: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:57:00.545: INFO: Wrong image for pod: daemon-set-pjr52. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:57:00.545: INFO: Pod daemon-set-pjr52 is not available
Aug 26 23:57:00.549: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:57:01.545: INFO: Wrong image for pod: daemon-set-pjr52. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 23:57:01.545: INFO: Pod daemon-set-pjr52 is not available
Aug 26 23:57:01.549: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:57:02.545: INFO: Pod daemon-set-kcnwd is not available
Aug 26 23:57:02.549: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Aug 26 23:57:02.553: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:57:02.556: INFO: Number of nodes with available pods: 1
Aug 26 23:57:02.556: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 23:57:03.561: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:57:03.564: INFO: Number of nodes with available pods: 1
Aug 26 23:57:03.564: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 23:57:04.562: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:57:04.566: INFO: Number of nodes with available pods: 1
Aug 26 23:57:04.566: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 26 23:57:05.589: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 23:57:05.596: INFO: Number of nodes with available pods: 2
Aug 26 23:57:05.596: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4403, will wait for the garbage collector to delete the pods
Aug 26 23:57:05.671: INFO: Deleting DaemonSet.extensions daemon-set took: 5.978134ms
Aug 26 23:57:05.971: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.332539ms
Aug 26 23:57:10.274: INFO: Number of nodes with available pods: 0
Aug 26 23:57:10.274: INFO: Number of running nodes: 0, number of available pods: 0
Aug 26 23:57:10.277: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4403/daemonsets","resourceVersion":"4052562"},"items":null}

Aug 26 23:57:10.278: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4403/pods","resourceVersion":"4052562"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:57:10.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4403" for this suite.

• [SLOW TEST:38.054 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":261,"skipped":4249,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:57:10.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1699.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1699.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1699.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1699.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1699.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1699.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1699.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1699.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1699.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1699.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1699.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 189.61.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.61.189_udp@PTR;check="$$(dig +tcp +noall +answer +search 189.61.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.61.189_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1699.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1699.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1699.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1699.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1699.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1699.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1699.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1699.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1699.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1699.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1699.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 189.61.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.61.189_udp@PTR;check="$$(dig +tcp +noall +answer +search 189.61.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.61.189_tcp@PTR;sleep 1; done

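The generated loops above boil down to four lookup shapes; run by hand from a pod in the test namespace they would look roughly like this (service name and ClusterIP are the ones in the commands above):

  dig +short dns-test-service.dns-1699.svc.cluster.local A                # service A record
  dig +short _http._tcp.dns-test-service.dns-1699.svc.cluster.local SRV   # per-port SRV record
  dig +short -x 10.104.61.189                                             # PTR for the service ClusterIP
  dig +short $(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4}').dns-1699.pod.cluster.local A  # pod A record
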
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 23:57:22.940: INFO: Unable to read wheezy_udp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:22.942: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:22.946: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:22.948: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:22.965: INFO: Unable to read jessie_udp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:22.967: INFO: Unable to read jessie_tcp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:22.989: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:22.992: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:23.008: INFO: Lookups using dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7 failed for: [wheezy_udp@dns-test-service.dns-1699.svc.cluster.local wheezy_tcp@dns-test-service.dns-1699.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local jessie_udp@dns-test-service.dns-1699.svc.cluster.local jessie_tcp@dns-test-service.dns-1699.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local]

Aug 26 23:57:28.012: INFO: Unable to read wheezy_udp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:28.015: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:28.018: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:28.022: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:28.059: INFO: Unable to read jessie_udp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:28.061: INFO: Unable to read jessie_tcp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:28.063: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:28.065: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:28.080: INFO: Lookups using dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7 failed for: [wheezy_udp@dns-test-service.dns-1699.svc.cluster.local wheezy_tcp@dns-test-service.dns-1699.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local jessie_udp@dns-test-service.dns-1699.svc.cluster.local jessie_tcp@dns-test-service.dns-1699.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local]

Aug 26 23:57:33.013: INFO: Unable to read wheezy_udp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:33.016: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:33.019: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:33.022: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:33.041: INFO: Unable to read jessie_udp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:33.043: INFO: Unable to read jessie_tcp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:33.045: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:33.047: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:33.063: INFO: Lookups using dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7 failed for: [wheezy_udp@dns-test-service.dns-1699.svc.cluster.local wheezy_tcp@dns-test-service.dns-1699.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local jessie_udp@dns-test-service.dns-1699.svc.cluster.local jessie_tcp@dns-test-service.dns-1699.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local]

Aug 26 23:57:38.013: INFO: Unable to read wheezy_udp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:38.017: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:38.020: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:38.023: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:38.045: INFO: Unable to read jessie_udp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:38.049: INFO: Unable to read jessie_tcp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:38.051: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:38.055: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:38.075: INFO: Lookups using dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7 failed for: [wheezy_udp@dns-test-service.dns-1699.svc.cluster.local wheezy_tcp@dns-test-service.dns-1699.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local jessie_udp@dns-test-service.dns-1699.svc.cluster.local jessie_tcp@dns-test-service.dns-1699.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local]

Aug 26 23:57:43.013: INFO: Unable to read wheezy_udp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:43.016: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:43.019: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:43.022: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:43.045: INFO: Unable to read jessie_udp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:43.048: INFO: Unable to read jessie_tcp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:43.053: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:43.056: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:43.071: INFO: Lookups using dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7 failed for: [wheezy_udp@dns-test-service.dns-1699.svc.cluster.local wheezy_tcp@dns-test-service.dns-1699.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local jessie_udp@dns-test-service.dns-1699.svc.cluster.local jessie_tcp@dns-test-service.dns-1699.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local]

Aug 26 23:57:48.013: INFO: Unable to read wheezy_udp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:48.017: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:48.021: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:48.024: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:48.050: INFO: Unable to read jessie_udp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:48.052: INFO: Unable to read jessie_tcp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:48.055: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:48.058: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: the server could not find the requested resource (get pods dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7)
Aug 26 23:57:48.075: INFO: Lookups using dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7 failed for: [wheezy_udp@dns-test-service.dns-1699.svc.cluster.local wheezy_tcp@dns-test-service.dns-1699.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local jessie_udp@dns-test-service.dns-1699.svc.cluster.local jessie_tcp@dns-test-service.dns-1699.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1699.svc.cluster.local]

Aug 26 23:57:53.408: INFO: Unable to read wheezy_udp@dns-test-service.dns-1699.svc.cluster.local from pod dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7: Get https://172.30.12.66:37695/api/v1/namespaces/dns-1699/pods/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7/proxy/results/wheezy_udp@dns-test-service.dns-1699.svc.cluster.local: stream error: stream ID 12457; INTERNAL_ERROR
Aug 26 23:57:53.458: INFO: Lookups using dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7 failed for: [wheezy_udp@dns-test-service.dns-1699.svc.cluster.local]

Aug 26 23:57:58.063: INFO: DNS probes using dns-1699/dns-test-ed1eaf53-d266-4e52-9501-19cbc90649b7 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:57:58.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1699" for this suite.

• [SLOW TEST:48.639 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":278,"completed":262,"skipped":4291,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:57:58.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-2751, will wait for the garbage collector to delete the pods
Aug 26 23:58:05.312: INFO: Deleting Job.batch foo took: 5.933319ms
Aug 26 23:58:05.612: INFO: Terminating Job.batch foo pods took: 300.258582ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:58:39.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2751" for this suite.

• [SLOW TEST:40.808 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":263,"skipped":4300,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:58:39.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 26 23:58:39.802: INFO: Waiting up to 5m0s for pod "pod-766b1bd5-9e8d-4af6-85e7-420f0c57c4f5" in namespace "emptydir-967" to be "success or failure"
Aug 26 23:58:39.805: INFO: Pod "pod-766b1bd5-9e8d-4af6-85e7-420f0c57c4f5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.56048ms
Aug 26 23:58:41.809: INFO: Pod "pod-766b1bd5-9e8d-4af6-85e7-420f0c57c4f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007657009s
Aug 26 23:58:43.813: INFO: Pod "pod-766b1bd5-9e8d-4af6-85e7-420f0c57c4f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011540916s
STEP: Saw pod success
Aug 26 23:58:43.813: INFO: Pod "pod-766b1bd5-9e8d-4af6-85e7-420f0c57c4f5" satisfied condition "success or failure"
Aug 26 23:58:43.816: INFO: Trying to get logs from node jerma-worker pod pod-766b1bd5-9e8d-4af6-85e7-420f0c57c4f5 container test-container: 
STEP: delete the pod
Aug 26 23:58:44.035: INFO: Waiting for pod pod-766b1bd5-9e8d-4af6-85e7-420f0c57c4f5 to disappear
Aug 26 23:58:44.056: INFO: Pod pod-766b1bd5-9e8d-4af6-85e7-420f0c57c4f5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 26 23:58:44.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-967" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4315,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 26 23:58:44.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-ba089c9d-8f26-4a3b-95dc-f629e7b614a6 in namespace container-probe-1364
Aug 26 23:58:48.133: INFO: Started pod busybox-ba089c9d-8f26-4a3b-95dc-f629e7b614a6 in namespace container-probe-1364
STEP: checking the pod's current state and verifying that restartCount is present
Aug 26 23:58:48.135: INFO: Initial restart count of pod busybox-ba089c9d-8f26-4a3b-95dc-f629e7b614a6 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 00:02:49.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1364" for this suite.

• [SLOW TEST:245.648 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4337,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 00:02:49.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 27 00:02:54.323: INFO: Successfully updated pod "pod-update-activedeadlineseconds-c0c0c74c-8c66-4f8f-b504-2fb10e2da5ce"
Aug 27 00:02:54.323: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-c0c0c74c-8c66-4f8f-b504-2fb10e2da5ce" in namespace "pods-9131" to be "terminated due to deadline exceeded"
Aug 27 00:02:54.357: INFO: Pod "pod-update-activedeadlineseconds-c0c0c74c-8c66-4f8f-b504-2fb10e2da5ce": Phase="Running", Reason="", readiness=true. Elapsed: 34.09832ms
Aug 27 00:02:56.360: INFO: Pod "pod-update-activedeadlineseconds-c0c0c74c-8c66-4f8f-b504-2fb10e2da5ce": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.037783889s
Aug 27 00:02:56.361: INFO: Pod "pod-update-activedeadlineseconds-c0c0c74c-8c66-4f8f-b504-2fb10e2da5ce" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 00:02:56.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9131" for this suite.

• [SLOW TEST:6.657 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4372,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 00:02:56.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 27 00:02:56.481: INFO: Waiting up to 5m0s for pod "pod-12220116-f3b6-4424-8168-4da5fc198636" in namespace "emptydir-7721" to be "success or failure"
Aug 27 00:02:56.503: INFO: Pod "pod-12220116-f3b6-4424-8168-4da5fc198636": Phase="Pending", Reason="", readiness=false. Elapsed: 21.689726ms
Aug 27 00:02:58.543: INFO: Pod "pod-12220116-f3b6-4424-8168-4da5fc198636": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061867995s
Aug 27 00:03:00.547: INFO: Pod "pod-12220116-f3b6-4424-8168-4da5fc198636": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06559388s
STEP: Saw pod success
Aug 27 00:03:00.547: INFO: Pod "pod-12220116-f3b6-4424-8168-4da5fc198636" satisfied condition "success or failure"
Aug 27 00:03:00.550: INFO: Trying to get logs from node jerma-worker pod pod-12220116-f3b6-4424-8168-4da5fc198636 container test-container: 
STEP: delete the pod
Aug 27 00:03:00.658: INFO: Waiting for pod pod-12220116-f3b6-4424-8168-4da5fc198636 to disappear
Aug 27 00:03:00.670: INFO: Pod pod-12220116-f3b6-4424-8168-4da5fc198636 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 00:03:00.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7721" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4377,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 00:03:00.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
Aug 27 00:03:00.791: INFO: Waiting up to 5m0s for pod "var-expansion-c1e79999-32b7-4c98-8c83-1cc2d0e510f3" in namespace "var-expansion-2" to be "success or failure"
Aug 27 00:03:00.808: INFO: Pod "var-expansion-c1e79999-32b7-4c98-8c83-1cc2d0e510f3": Phase="Pending", Reason="", readiness=false. Elapsed: 17.310888ms
Aug 27 00:03:02.812: INFO: Pod "var-expansion-c1e79999-32b7-4c98-8c83-1cc2d0e510f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020970665s
Aug 27 00:03:04.824: INFO: Pod "var-expansion-c1e79999-32b7-4c98-8c83-1cc2d0e510f3": Phase="Running", Reason="", readiness=true. Elapsed: 4.033554314s
Aug 27 00:03:06.828: INFO: Pod "var-expansion-c1e79999-32b7-4c98-8c83-1cc2d0e510f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036849065s
STEP: Saw pod success
Aug 27 00:03:06.828: INFO: Pod "var-expansion-c1e79999-32b7-4c98-8c83-1cc2d0e510f3" satisfied condition "success or failure"
Aug 27 00:03:06.830: INFO: Trying to get logs from node jerma-worker pod var-expansion-c1e79999-32b7-4c98-8c83-1cc2d0e510f3 container dapi-container: 
STEP: delete the pod
Aug 27 00:03:06.879: INFO: Waiting for pod var-expansion-c1e79999-32b7-4c98-8c83-1cc2d0e510f3 to disappear
Aug 27 00:03:06.902: INFO: Pod var-expansion-c1e79999-32b7-4c98-8c83-1cc2d0e510f3 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 00:03:06.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2" for this suite.

• [SLOW TEST:6.225 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4408,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 00:03:06.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 00:03:14.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3656" for this suite.
STEP: Destroying namespace "nsdeletetest-5995" for this suite.
Aug 27 00:03:14.929: INFO: Namespace nsdeletetest-5995 was already deleted
STEP: Destroying namespace "nsdeletetest-6815" for this suite.

• [SLOW TEST:8.032 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":269,"skipped":4441,"failed":0}
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 00:03:14.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 00:03:15.177: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 00:03:20.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-779" for this suite.

• [SLOW TEST:5.975 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":270,"skipped":4441,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 00:03:20.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 27 00:03:21.536: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug 27 00:03:23.562: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734083401, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734083401, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734083401, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734083401, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 00:03:25.567: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734083401, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734083401, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734083401, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734083401, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 27 00:03:28.612: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 00:03:28.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 00:03:29.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-6593" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:9.133 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":271,"skipped":4449,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 00:03:30.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-302f458e-5952-4d92-906f-8129fe8d06fb
STEP: Creating a pod to test consume configMaps
Aug 27 00:03:30.138: INFO: Waiting up to 5m0s for pod "pod-configmaps-9aabd54d-49b6-447e-a196-3286ed30fd8c" in namespace "configmap-2742" to be "success or failure"
Aug 27 00:03:30.141: INFO: Pod "pod-configmaps-9aabd54d-49b6-447e-a196-3286ed30fd8c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.617513ms
Aug 27 00:03:32.145: INFO: Pod "pod-configmaps-9aabd54d-49b6-447e-a196-3286ed30fd8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007253902s
Aug 27 00:03:34.150: INFO: Pod "pod-configmaps-9aabd54d-49b6-447e-a196-3286ed30fd8c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012125526s
Aug 27 00:03:36.244: INFO: Pod "pod-configmaps-9aabd54d-49b6-447e-a196-3286ed30fd8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.106366561s
STEP: Saw pod success
Aug 27 00:03:36.244: INFO: Pod "pod-configmaps-9aabd54d-49b6-447e-a196-3286ed30fd8c" satisfied condition "success or failure"
Aug 27 00:03:36.349: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-9aabd54d-49b6-447e-a196-3286ed30fd8c container configmap-volume-test: 
STEP: delete the pod
Aug 27 00:03:36.446: INFO: Waiting for pod pod-configmaps-9aabd54d-49b6-447e-a196-3286ed30fd8c to disappear
Aug 27 00:03:36.579: INFO: Pod pod-configmaps-9aabd54d-49b6-447e-a196-3286ed30fd8c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 00:03:36.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2742" for this suite.

• [SLOW TEST:6.538 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4468,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 00:03:36.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-3345
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-3345
I0827 00:03:36.793464       6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3345, replica count: 2
I0827 00:03:39.843879       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 00:03:42.844179       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 27 00:03:42.844: INFO: Creating new exec pod
Aug 27 00:03:47.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3345 execpodcrnpx -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 27 00:03:51.146: INFO: stderr: "I0827 00:03:51.021106    3903 log.go:172] (0xc0008ea000) (0xc0005f85a0) Create stream\nI0827 00:03:51.021146    3903 log.go:172] (0xc0008ea000) (0xc0005f85a0) Stream added, broadcasting: 1\nI0827 00:03:51.023797    3903 log.go:172] (0xc0008ea000) Reply frame received for 1\nI0827 00:03:51.023825    3903 log.go:172] (0xc0008ea000) (0xc000904140) Create stream\nI0827 00:03:51.023832    3903 log.go:172] (0xc0008ea000) (0xc000904140) Stream added, broadcasting: 3\nI0827 00:03:51.024588    3903 log.go:172] (0xc0008ea000) Reply frame received for 3\nI0827 00:03:51.024608    3903 log.go:172] (0xc0008ea000) (0xc0006b5cc0) Create stream\nI0827 00:03:51.024614    3903 log.go:172] (0xc0008ea000) (0xc0006b5cc0) Stream added, broadcasting: 5\nI0827 00:03:51.025630    3903 log.go:172] (0xc0008ea000) Reply frame received for 5\nI0827 00:03:51.119821    3903 log.go:172] (0xc0008ea000) Data frame received for 5\nI0827 00:03:51.119859    3903 log.go:172] (0xc0006b5cc0) (5) Data frame handling\nI0827 00:03:51.119884    3903 log.go:172] (0xc0006b5cc0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0827 00:03:51.120198    3903 log.go:172] (0xc0008ea000) Data frame received for 5\nI0827 00:03:51.120235    3903 log.go:172] (0xc0006b5cc0) (5) Data frame handling\nI0827 00:03:51.120263    3903 log.go:172] (0xc0006b5cc0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0827 00:03:51.120339    3903 log.go:172] (0xc0008ea000) Data frame received for 5\nI0827 00:03:51.120354    3903 log.go:172] (0xc0006b5cc0) (5) Data frame handling\nI0827 00:03:51.120851    3903 log.go:172] (0xc0008ea000) Data frame received for 3\nI0827 00:03:51.120884    3903 log.go:172] (0xc000904140) (3) Data frame handling\nI0827 00:03:51.129686    3903 log.go:172] (0xc0008ea000) Data frame received for 1\nI0827 00:03:51.129708    3903 log.go:172] (0xc0005f85a0) (1) Data frame handling\nI0827 00:03:51.129742    3903 log.go:172] (0xc0005f85a0) (1) Data frame sent\nI0827 00:03:51.130514    3903 log.go:172] (0xc0008ea000) (0xc0005f85a0) Stream removed, broadcasting: 1\nI0827 00:03:51.130988    3903 log.go:172] (0xc0008ea000) (0xc0005f85a0) Stream removed, broadcasting: 1\nI0827 00:03:51.131009    3903 log.go:172] (0xc0008ea000) (0xc000904140) Stream removed, broadcasting: 3\nI0827 00:03:51.131168    3903 log.go:172] (0xc0008ea000) (0xc0006b5cc0) Stream removed, broadcasting: 5\n"
Aug 27 00:03:51.146: INFO: stdout: ""
Aug 27 00:03:51.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3345 execpodcrnpx -- /bin/sh -x -c nc -zv -t -w 2 10.106.9.102 80'
Aug 27 00:03:51.370: INFO: stderr: "I0827 00:03:51.281804    3936 log.go:172] (0xc0006f0790) (0xc0006b6140) Create stream\nI0827 00:03:51.281868    3936 log.go:172] (0xc0006f0790) (0xc0006b6140) Stream added, broadcasting: 1\nI0827 00:03:51.285072    3936 log.go:172] (0xc0006f0790) Reply frame received for 1\nI0827 00:03:51.285109    3936 log.go:172] (0xc0006f0790) (0xc0005cb9a0) Create stream\nI0827 00:03:51.285121    3936 log.go:172] (0xc0006f0790) (0xc0005cb9a0) Stream added, broadcasting: 3\nI0827 00:03:51.286211    3936 log.go:172] (0xc0006f0790) Reply frame received for 3\nI0827 00:03:51.286243    3936 log.go:172] (0xc0006f0790) (0xc0005585a0) Create stream\nI0827 00:03:51.286254    3936 log.go:172] (0xc0006f0790) (0xc0005585a0) Stream added, broadcasting: 5\nI0827 00:03:51.287065    3936 log.go:172] (0xc0006f0790) Reply frame received for 5\nI0827 00:03:51.359816    3936 log.go:172] (0xc0006f0790) Data frame received for 5\nI0827 00:03:51.359842    3936 log.go:172] (0xc0005585a0) (5) Data frame handling\nI0827 00:03:51.359849    3936 log.go:172] (0xc0005585a0) (5) Data frame sent\nI0827 00:03:51.359854    3936 log.go:172] (0xc0006f0790) Data frame received for 5\nI0827 00:03:51.359859    3936 log.go:172] (0xc0005585a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.106.9.102 80\nConnection to 10.106.9.102 80 port [tcp/http] succeeded!\nI0827 00:03:51.359876    3936 log.go:172] (0xc0006f0790) Data frame received for 3\nI0827 00:03:51.359881    3936 log.go:172] (0xc0005cb9a0) (3) Data frame handling\nI0827 00:03:51.361150    3936 log.go:172] (0xc0006f0790) Data frame received for 1\nI0827 00:03:51.361172    3936 log.go:172] (0xc0006b6140) (1) Data frame handling\nI0827 00:03:51.361187    3936 log.go:172] (0xc0006b6140) (1) Data frame sent\nI0827 00:03:51.361205    3936 log.go:172] (0xc0006f0790) (0xc0006b6140) Stream removed, broadcasting: 1\nI0827 00:03:51.361226    3936 log.go:172] (0xc0006f0790) Go away received\nI0827 00:03:51.361559    3936 log.go:172] (0xc0006f0790) (0xc0006b6140) Stream removed, broadcasting: 1\nI0827 00:03:51.361574    3936 log.go:172] (0xc0006f0790) (0xc0005cb9a0) Stream removed, broadcasting: 3\nI0827 00:03:51.361581    3936 log.go:172] (0xc0006f0790) (0xc0005585a0) Stream removed, broadcasting: 5\n"
Aug 27 00:03:51.370: INFO: stdout: ""
Aug 27 00:03:51.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3345 execpodcrnpx -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 31190'
Aug 27 00:03:51.560: INFO: stderr: "I0827 00:03:51.488215    3959 log.go:172] (0xc00002ad10) (0xc000681e00) Create stream\nI0827 00:03:51.488263    3959 log.go:172] (0xc00002ad10) (0xc000681e00) Stream added, broadcasting: 1\nI0827 00:03:51.490858    3959 log.go:172] (0xc00002ad10) Reply frame received for 1\nI0827 00:03:51.490899    3959 log.go:172] (0xc00002ad10) (0xc0006586e0) Create stream\nI0827 00:03:51.490918    3959 log.go:172] (0xc00002ad10) (0xc0006586e0) Stream added, broadcasting: 3\nI0827 00:03:51.491693    3959 log.go:172] (0xc00002ad10) Reply frame received for 3\nI0827 00:03:51.491716    3959 log.go:172] (0xc00002ad10) (0xc000681ea0) Create stream\nI0827 00:03:51.491723    3959 log.go:172] (0xc00002ad10) (0xc000681ea0) Stream added, broadcasting: 5\nI0827 00:03:51.492624    3959 log.go:172] (0xc00002ad10) Reply frame received for 5\nI0827 00:03:51.549044    3959 log.go:172] (0xc00002ad10) Data frame received for 5\nI0827 00:03:51.549071    3959 log.go:172] (0xc000681ea0) (5) Data frame handling\nI0827 00:03:51.549109    3959 log.go:172] (0xc000681ea0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.6 31190\nConnection to 172.18.0.6 31190 port [tcp/31190] succeeded!\nI0827 00:03:51.549307    3959 log.go:172] (0xc00002ad10) Data frame received for 5\nI0827 00:03:51.549334    3959 log.go:172] (0xc000681ea0) (5) Data frame handling\nI0827 00:03:51.549354    3959 log.go:172] (0xc00002ad10) Data frame received for 3\nI0827 00:03:51.549363    3959 log.go:172] (0xc0006586e0) (3) Data frame handling\nI0827 00:03:51.550950    3959 log.go:172] (0xc00002ad10) Data frame received for 1\nI0827 00:03:51.550970    3959 log.go:172] (0xc000681e00) (1) Data frame handling\nI0827 00:03:51.551002    3959 log.go:172] (0xc000681e00) (1) Data frame sent\nI0827 00:03:51.551027    3959 log.go:172] (0xc00002ad10) (0xc000681e00) Stream removed, broadcasting: 1\nI0827 00:03:51.551107    3959 log.go:172] (0xc00002ad10) Go away received\nI0827 00:03:51.551347    3959 log.go:172] (0xc00002ad10) (0xc000681e00) Stream removed, broadcasting: 1\nI0827 00:03:51.551368    3959 log.go:172] (0xc00002ad10) (0xc0006586e0) Stream removed, broadcasting: 3\nI0827 00:03:51.551376    3959 log.go:172] (0xc00002ad10) (0xc000681ea0) Stream removed, broadcasting: 5\n"
Aug 27 00:03:51.560: INFO: stdout: ""
Aug 27 00:03:51.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3345 execpodcrnpx -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.3 31190'
Aug 27 00:03:51.763: INFO: stderr: "I0827 00:03:51.690135    3980 log.go:172] (0xc0000f6fd0) (0xc0006f3d60) Create stream\nI0827 00:03:51.690181    3980 log.go:172] (0xc0000f6fd0) (0xc0006f3d60) Stream added, broadcasting: 1\nI0827 00:03:51.692002    3980 log.go:172] (0xc0000f6fd0) Reply frame received for 1\nI0827 00:03:51.692024    3980 log.go:172] (0xc0000f6fd0) (0xc000690640) Create stream\nI0827 00:03:51.692031    3980 log.go:172] (0xc0000f6fd0) (0xc000690640) Stream added, broadcasting: 3\nI0827 00:03:51.692856    3980 log.go:172] (0xc0000f6fd0) Reply frame received for 3\nI0827 00:03:51.692876    3980 log.go:172] (0xc0000f6fd0) (0xc0004d9400) Create stream\nI0827 00:03:51.692882    3980 log.go:172] (0xc0000f6fd0) (0xc0004d9400) Stream added, broadcasting: 5\nI0827 00:03:51.693489    3980 log.go:172] (0xc0000f6fd0) Reply frame received for 5\nI0827 00:03:51.753310    3980 log.go:172] (0xc0000f6fd0) Data frame received for 5\nI0827 00:03:51.753332    3980 log.go:172] (0xc0004d9400) (5) Data frame handling\nI0827 00:03:51.753343    3980 log.go:172] (0xc0004d9400) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.3 31190\nConnection to 172.18.0.3 31190 port [tcp/31190] succeeded!\nI0827 00:03:51.753754    3980 log.go:172] (0xc0000f6fd0) Data frame received for 5\nI0827 00:03:51.753770    3980 log.go:172] (0xc0004d9400) (5) Data frame handling\nI0827 00:03:51.753800    3980 log.go:172] (0xc0000f6fd0) Data frame received for 3\nI0827 00:03:51.753821    3980 log.go:172] (0xc000690640) (3) Data frame handling\nI0827 00:03:51.754503    3980 log.go:172] (0xc0000f6fd0) Data frame received for 1\nI0827 00:03:51.754512    3980 log.go:172] (0xc0006f3d60) (1) Data frame handling\nI0827 00:03:51.754518    3980 log.go:172] (0xc0006f3d60) (1) Data frame sent\nI0827 00:03:51.754640    3980 log.go:172] (0xc0000f6fd0) (0xc0006f3d60) Stream removed, broadcasting: 1\nI0827 00:03:51.754703    3980 log.go:172] (0xc0000f6fd0) Go away received\nI0827 00:03:51.754873    3980 log.go:172] (0xc0000f6fd0) (0xc0006f3d60) Stream removed, broadcasting: 1\nI0827 00:03:51.754882    3980 log.go:172] (0xc0000f6fd0) (0xc000690640) Stream removed, broadcasting: 3\nI0827 00:03:51.754887    3980 log.go:172] (0xc0000f6fd0) (0xc0004d9400) Stream removed, broadcasting: 5\n"
Aug 27 00:03:51.763: INFO: stdout: ""
Aug 27 00:03:51.763: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 00:03:52.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3345" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:15.797 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":273,"skipped":4478,"failed":0}
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 00:03:52.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 00:04:26.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2202" for this suite.

• [SLOW TEST:33.712 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4484,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 00:04:26.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 27 00:04:26.147: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1237ab8f-6c49-4dde-b276-b01371135c4f" in namespace "downward-api-7784" to be "success or failure"
Aug 27 00:04:26.202: INFO: Pod "downwardapi-volume-1237ab8f-6c49-4dde-b276-b01371135c4f": Phase="Pending", Reason="", readiness=false. Elapsed: 55.475464ms
Aug 27 00:04:28.220: INFO: Pod "downwardapi-volume-1237ab8f-6c49-4dde-b276-b01371135c4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073358695s
Aug 27 00:04:30.232: INFO: Pod "downwardapi-volume-1237ab8f-6c49-4dde-b276-b01371135c4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085430386s
STEP: Saw pod success
Aug 27 00:04:30.232: INFO: Pod "downwardapi-volume-1237ab8f-6c49-4dde-b276-b01371135c4f" satisfied condition "success or failure"
Aug 27 00:04:30.235: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1237ab8f-6c49-4dde-b276-b01371135c4f container client-container: 
STEP: delete the pod
Aug 27 00:04:30.269: INFO: Waiting for pod downwardapi-volume-1237ab8f-6c49-4dde-b276-b01371135c4f to disappear
Aug 27 00:04:30.275: INFO: Pod downwardapi-volume-1237ab8f-6c49-4dde-b276-b01371135c4f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 00:04:30.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7784" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4494,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 00:04:30.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 27 00:04:30.360: INFO: Waiting up to 5m0s for pod "downwardapi-volume-15b134e5-5940-4940-81de-b8bd2c9abb5e" in namespace "downward-api-6480" to be "success or failure"
Aug 27 00:04:30.365: INFO: Pod "downwardapi-volume-15b134e5-5940-4940-81de-b8bd2c9abb5e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.068632ms
Aug 27 00:04:32.400: INFO: Pod "downwardapi-volume-15b134e5-5940-4940-81de-b8bd2c9abb5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039584482s
Aug 27 00:04:34.404: INFO: Pod "downwardapi-volume-15b134e5-5940-4940-81de-b8bd2c9abb5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043530505s
STEP: Saw pod success
Aug 27 00:04:34.404: INFO: Pod "downwardapi-volume-15b134e5-5940-4940-81de-b8bd2c9abb5e" satisfied condition "success or failure"
Aug 27 00:04:34.407: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-15b134e5-5940-4940-81de-b8bd2c9abb5e container client-container: 
STEP: delete the pod
Aug 27 00:04:34.444: INFO: Waiting for pod downwardapi-volume-15b134e5-5940-4940-81de-b8bd2c9abb5e to disappear
Aug 27 00:04:34.457: INFO: Pod downwardapi-volume-15b134e5-5940-4940-81de-b8bd2c9abb5e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 00:04:34.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6480" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4510,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 00:04:34.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Aug 27 00:04:34.737: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Aug 27 00:04:43.904: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 00:04:43.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1647" for this suite.

• [SLOW TEST:9.452 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4532,"failed":0}
SSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 00:04:43.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 27 00:04:44.001: INFO: Waiting up to 5m0s for pod "downward-api-2f1cb85a-3047-4a42-8f47-811fa45eba1b" in namespace "downward-api-3034" to be "success or failure"
Aug 27 00:04:44.018: INFO: Pod "downward-api-2f1cb85a-3047-4a42-8f47-811fa45eba1b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.50596ms
Aug 27 00:04:46.022: INFO: Pod "downward-api-2f1cb85a-3047-4a42-8f47-811fa45eba1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021527107s
Aug 27 00:04:48.026: INFO: Pod "downward-api-2f1cb85a-3047-4a42-8f47-811fa45eba1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025373251s
STEP: Saw pod success
Aug 27 00:04:48.026: INFO: Pod "downward-api-2f1cb85a-3047-4a42-8f47-811fa45eba1b" satisfied condition "success or failure"
Aug 27 00:04:48.029: INFO: Trying to get logs from node jerma-worker pod downward-api-2f1cb85a-3047-4a42-8f47-811fa45eba1b container dapi-container: 
STEP: delete the pod
Aug 27 00:04:48.046: INFO: Waiting for pod downward-api-2f1cb85a-3047-4a42-8f47-811fa45eba1b to disappear
Aug 27 00:04:48.050: INFO: Pod downward-api-2f1cb85a-3047-4a42-8f47-811fa45eba1b no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 00:04:48.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3034" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4540,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
Aug 27 00:04:48.079: INFO: Running AfterSuite actions on all nodes
Aug 27 00:04:48.079: INFO: Running AfterSuite actions on node 1
Aug 27 00:04:48.079: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4566,"failed":0}

Ran 278 of 4844 Specs in 4803.286 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4566 Skipped
PASS