I1006 20:03:30.693795 7 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I1006 20:03:30.699339 7 e2e.go:109] Starting e2e run "c6e194a1-7168-4703-bc60-030734409460" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1602014598 - Will randomize all specs
Will run 278 of 4845 specs

Oct 6 20:03:31.281: INFO: >>> kubeConfig: /root/.kube/config
Oct 6 20:03:31.333: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 6 20:03:31.500: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 6 20:03:31.677: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 6 20:03:31.677: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Oct 6 20:03:31.677: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 6 20:03:31.720: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Oct 6 20:03:31.720: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 6 20:03:31.720: INFO: e2e test version: v1.17.12
Oct 6 20:03:31.726: INFO: kube-apiserver version: v1.17.5
Oct 6 20:03:31.729: INFO: >>> kubeConfig: /root/.kube/config
Oct 6 20:03:31.763: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct 6 20:03:31.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
Oct 6 20:03:31.860: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Oct 6 20:03:31.893: INFO: Waiting up to 5m0s for pod "pod-18740d20-abec-4914-bd70-076c5f9e451f" in namespace "emptydir-7429" to be "success or failure"
Oct 6 20:03:31.904: INFO: Pod "pod-18740d20-abec-4914-bd70-076c5f9e451f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.712183ms
Oct 6 20:03:33.914: INFO: Pod "pod-18740d20-abec-4914-bd70-076c5f9e451f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02065586s
Oct 6 20:03:35.924: INFO: Pod "pod-18740d20-abec-4914-bd70-076c5f9e451f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030416251s
STEP: Saw pod success
Oct 6 20:03:35.924: INFO: Pod "pod-18740d20-abec-4914-bd70-076c5f9e451f" satisfied condition "success or failure"
Oct 6 20:03:35.929: INFO: Trying to get logs from node jerma-worker2 pod pod-18740d20-abec-4914-bd70-076c5f9e451f container test-container:
STEP: delete the pod
Oct 6 20:03:36.027: INFO: Waiting for pod pod-18740d20-abec-4914-bd70-076c5f9e451f to disappear
Oct 6 20:03:36.040: INFO: Pod pod-18740d20-abec-4914-bd70-076c5f9e451f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct 6 20:03:36.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7429" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":5,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct 6 20:03:36.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Oct 6 20:03:36.156: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7dc2b1b4-cd55-4009-b9a5-0557b9bdc54a" in namespace "downward-api-4863" to be "success or failure"
Oct 6 20:03:36.318: INFO: Pod "downwardapi-volume-7dc2b1b4-cd55-4009-b9a5-0557b9bdc54a": Phase="Pending", Reason="", readiness=false. Elapsed: 161.799075ms
Oct 6 20:03:38.396: INFO: Pod "downwardapi-volume-7dc2b1b4-cd55-4009-b9a5-0557b9bdc54a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239068376s
Oct 6 20:03:40.426: INFO: Pod "downwardapi-volume-7dc2b1b4-cd55-4009-b9a5-0557b9bdc54a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.269136228s
STEP: Saw pod success
Oct 6 20:03:40.426: INFO: Pod "downwardapi-volume-7dc2b1b4-cd55-4009-b9a5-0557b9bdc54a" satisfied condition "success or failure"
Oct 6 20:03:40.431: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-7dc2b1b4-cd55-4009-b9a5-0557b9bdc54a container client-container:
STEP: delete the pod
Oct 6 20:03:40.480: INFO: Waiting for pod downwardapi-volume-7dc2b1b4-cd55-4009-b9a5-0557b9bdc54a to disappear
Oct 6 20:03:40.490: INFO: Pod downwardapi-volume-7dc2b1b4-cd55-4009-b9a5-0557b9bdc54a no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct 6 20:03:40.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4863" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":8,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct 6 20:03:40.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Geting the pod
STEP: Reading file content from the nginx-container
Oct 6 20:03:46.632: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2507 PodName:pod-sharedvolume-9fa97bdc-437f-463e-b570-1f366a54aa88 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 6 20:03:46.634: INFO: >>> kubeConfig: /root/.kube/config
I1006 20:03:46.717274 7 log.go:172] (0x4002c069a0) (0x400248c280) Create stream
I1006 20:03:46.718255 7 log.go:172] (0x4002c069a0) (0x400248c280) Stream added, broadcasting: 1
I1006 20:03:46.740236 7 log.go:172] (0x4002c069a0) Reply frame received for 1
I1006 20:03:46.741857 7 log.go:172] (0x4002c069a0) (0x400248c320) Create stream
I1006 20:03:46.742026 7 log.go:172] (0x4002c069a0) (0x400248c320) Stream added, broadcasting: 3
I1006 20:03:46.744695 7 log.go:172] (0x4002c069a0) Reply frame received for 3
I1006 20:03:46.745013 7 log.go:172] (0x4002c069a0) (0x4002518000) Create stream
I1006 20:03:46.745087 7 log.go:172] (0x4002c069a0) (0x4002518000) Stream added, broadcasting: 5
I1006 20:03:46.746510 7 log.go:172] (0x4002c069a0) Reply frame received for 5
I1006 20:03:46.833897 7 log.go:172] (0x4002c069a0) Data frame received for 3
I1006 20:03:46.834538 7 log.go:172] (0x4002c069a0) Data frame received for 5
I1006 20:03:46.834756 7 log.go:172] (0x4002518000) (5) Data frame handling
I1006 20:03:46.834847 7 log.go:172] (0x400248c320) (3) Data frame handling
I1006 20:03:46.835036 7 log.go:172] (0x4002c069a0) Data frame received for 1
I1006 20:03:46.835138 7 log.go:172] (0x400248c280) (1) Data frame handling
I1006 20:03:46.838217 7 log.go:172] (0x400248c320) (3) Data frame sent
I1006 20:03:46.838515 7 log.go:172] (0x4002c069a0) Data frame received for 3
I1006 20:03:46.838647 7 log.go:172] (0x400248c320) (3) Data frame handling
I1006 20:03:46.838717 7 log.go:172] (0x400248c280) (1) Data frame sent
I1006 20:03:46.840026 7 log.go:172] (0x4002c069a0) (0x400248c280) Stream removed, broadcasting: 1
I1006 20:03:46.840597 7 log.go:172] (0x4002c069a0) Go away received
I1006 20:03:46.844045 7 log.go:172] (0x4002c069a0) (0x400248c280) Stream removed, broadcasting: 1
I1006 20:03:46.844392 7 log.go:172] (0x4002c069a0) (0x400248c320) Stream removed, broadcasting: 3
I1006 20:03:46.844627 7 log.go:172] (0x4002c069a0) (0x4002518000) Stream removed, broadcasting: 5
Oct 6 20:03:46.845: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct 6 20:03:46.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2507" for this suite.
• [SLOW TEST:6.351 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
pod should support shared volumes between containers [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":3,"skipped":36,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct 6 20:03:46.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-kcvz
STEP: Creating a pod to test atomic-volume-subpath
Oct 6 20:03:47.009: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-kcvz" in namespace "subpath-4790" to be "success or failure"
Oct 6 20:03:47.072: INFO: Pod "pod-subpath-test-downwardapi-kcvz": Phase="Pending", Reason="", readiness=false. Elapsed: 63.319727ms
Oct 6 20:03:49.079: INFO: Pod "pod-subpath-test-downwardapi-kcvz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070341552s
Oct 6 20:03:51.206: INFO: Pod "pod-subpath-test-downwardapi-kcvz": Phase="Running", Reason="", readiness=true. Elapsed: 4.197268743s
Oct 6 20:03:53.213: INFO: Pod "pod-subpath-test-downwardapi-kcvz": Phase="Running", Reason="", readiness=true. Elapsed: 6.20400034s
Oct 6 20:03:55.220: INFO: Pod "pod-subpath-test-downwardapi-kcvz": Phase="Running", Reason="", readiness=true. Elapsed: 8.21093768s
Oct 6 20:03:57.226: INFO: Pod "pod-subpath-test-downwardapi-kcvz": Phase="Running", Reason="", readiness=true. Elapsed: 10.217211076s
Oct 6 20:03:59.234: INFO: Pod "pod-subpath-test-downwardapi-kcvz": Phase="Running", Reason="", readiness=true. Elapsed: 12.225277187s
Oct 6 20:04:01.241: INFO: Pod "pod-subpath-test-downwardapi-kcvz": Phase="Running", Reason="", readiness=true. Elapsed: 14.232028138s
Oct 6 20:04:03.248: INFO: Pod "pod-subpath-test-downwardapi-kcvz": Phase="Running", Reason="", readiness=true. Elapsed: 16.238918976s
Oct 6 20:04:05.256: INFO: Pod "pod-subpath-test-downwardapi-kcvz": Phase="Running", Reason="", readiness=true. Elapsed: 18.246870829s
Oct 6 20:04:07.263: INFO: Pod "pod-subpath-test-downwardapi-kcvz": Phase="Running", Reason="", readiness=true. Elapsed: 20.254355639s
Oct 6 20:04:09.271: INFO: Pod "pod-subpath-test-downwardapi-kcvz": Phase="Running", Reason="", readiness=true. Elapsed: 22.262117836s
Oct 6 20:04:11.278: INFO: Pod "pod-subpath-test-downwardapi-kcvz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.269163449s
STEP: Saw pod success
Oct 6 20:04:11.278: INFO: Pod "pod-subpath-test-downwardapi-kcvz" satisfied condition "success or failure"
Oct 6 20:04:11.283: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-kcvz container test-container-subpath-downwardapi-kcvz:
STEP: delete the pod
Oct 6 20:04:11.306: INFO: Waiting for pod pod-subpath-test-downwardapi-kcvz to disappear
Oct 6 20:04:11.348: INFO: Pod pod-subpath-test-downwardapi-kcvz no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-kcvz
Oct 6 20:04:11.348: INFO: Deleting pod "pod-subpath-test-downwardapi-kcvz" in namespace "subpath-4790"
[AfterEach] [sig-storage] Subpath
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct 6 20:04:11.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4790" for this suite.
• [SLOW TEST:24.505 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with downward pod [LinuxOnly] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":4,"skipped":52,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct 6 20:04:11.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Oct 6 20:04:11.519: INFO: >>> kubeConfig: /root/.kube/config
Oct 6 20:04:30.089: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct 6 20:05:37.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3584" for this suite.
• [SLOW TEST:85.727 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of same group and version but different kinds [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":5,"skipped":54,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct 6 20:05:37.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct 6 20:05:37.183: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Oct 6 20:05:38.256: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct 6 20:05:38.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3535" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":6,"skipped":69,"failed":0}
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct 6 20:05:38.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1006 20:05:48.517297 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Oct 6 20:05:48.518: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct 6 20:05:48.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4303" for this suite.
• [SLOW TEST:10.170 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":7,"skipped":69,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct 6 20:05:48.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Oct 6 20:05:52.912: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct 6 20:05:52.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2342" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":87,"failed":0}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct 6 20:05:52.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-7jvj
STEP: Creating a pod to test atomic-volume-subpath
Oct 6 20:05:53.110: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-7jvj" in namespace "subpath-9632" to be "success or failure"
Oct 6 20:05:53.141: INFO: Pod "pod-subpath-test-secret-7jvj": Phase="Pending", Reason="", readiness=false. Elapsed: 30.978754ms
Oct 6 20:05:55.213: INFO: Pod "pod-subpath-test-secret-7jvj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102728639s
Oct 6 20:05:57.219: INFO: Pod "pod-subpath-test-secret-7jvj": Phase="Running", Reason="", readiness=true. Elapsed: 4.109284976s
Oct 6 20:05:59.267: INFO: Pod "pod-subpath-test-secret-7jvj": Phase="Running", Reason="", readiness=true. Elapsed: 6.15725274s
Oct 6 20:06:01.279: INFO: Pod "pod-subpath-test-secret-7jvj": Phase="Running", Reason="", readiness=true. Elapsed: 8.169158973s
Oct 6 20:06:03.286: INFO: Pod "pod-subpath-test-secret-7jvj": Phase="Running", Reason="", readiness=true. Elapsed: 10.175904965s
Oct 6 20:06:05.303: INFO: Pod "pod-subpath-test-secret-7jvj": Phase="Running", Reason="", readiness=true. Elapsed: 12.192577889s
Oct 6 20:06:07.309: INFO: Pod "pod-subpath-test-secret-7jvj": Phase="Running", Reason="", readiness=true. Elapsed: 14.198703883s
Oct 6 20:06:09.316: INFO: Pod "pod-subpath-test-secret-7jvj": Phase="Running", Reason="", readiness=true. Elapsed: 16.2053316s
Oct 6 20:06:11.322: INFO: Pod "pod-subpath-test-secret-7jvj": Phase="Running", Reason="", readiness=true. Elapsed: 18.212073536s
Oct 6 20:06:13.329: INFO: Pod "pod-subpath-test-secret-7jvj": Phase="Running", Reason="", readiness=true. Elapsed: 20.218648505s
Oct 6 20:06:15.336: INFO: Pod "pod-subpath-test-secret-7jvj": Phase="Running", Reason="", readiness=true. Elapsed: 22.225767781s
Oct 6 20:06:17.346: INFO: Pod "pod-subpath-test-secret-7jvj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.235464397s
STEP: Saw pod success
Oct 6 20:06:17.346: INFO: Pod "pod-subpath-test-secret-7jvj" satisfied condition "success or failure"
Oct 6 20:06:17.351: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-7jvj container test-container-subpath-secret-7jvj:
STEP: delete the pod
Oct 6 20:06:17.388: INFO: Waiting for pod pod-subpath-test-secret-7jvj to disappear
Oct 6 20:06:17.398: INFO: Pod pod-subpath-test-secret-7jvj no longer exists
STEP: Deleting pod pod-subpath-test-secret-7jvj
Oct 6 20:06:17.398: INFO: Deleting pod "pod-subpath-test-secret-7jvj" in namespace "subpath-9632"
[AfterEach] [sig-storage] Subpath
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct 6 20:06:17.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9632" for this suite.
• [SLOW TEST:24.461 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with secret pod [LinuxOnly] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":9,"skipped":89,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct 6 20:06:17.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Oct 6 20:06:17.577: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bcdb2f89-b94c-4503-bcd1-d2bab59502cc" in namespace "downward-api-7141" to be "success or failure"
Oct 6 20:06:17.621: INFO: Pod "downwardapi-volume-bcdb2f89-b94c-4503-bcd1-d2bab59502cc": Phase="Pending", Reason="", readiness=false. Elapsed: 43.069944ms
Oct 6 20:06:19.699: INFO: Pod "downwardapi-volume-bcdb2f89-b94c-4503-bcd1-d2bab59502cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121435842s
Oct 6 20:06:21.706: INFO: Pod "downwardapi-volume-bcdb2f89-b94c-4503-bcd1-d2bab59502cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.128861543s
STEP: Saw pod success
Oct 6 20:06:21.707: INFO: Pod "downwardapi-volume-bcdb2f89-b94c-4503-bcd1-d2bab59502cc" satisfied condition "success or failure"
Oct 6 20:06:21.719: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-bcdb2f89-b94c-4503-bcd1-d2bab59502cc container client-container:
STEP: delete the pod
Oct 6 20:06:21.775: INFO: Waiting for pod downwardapi-volume-bcdb2f89-b94c-4503-bcd1-d2bab59502cc to disappear
Oct 6 20:06:21.793: INFO: Pod downwardapi-volume-bcdb2f89-b94c-4503-bcd1-d2bab59502cc no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct 6 20:06:21.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7141" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":121,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct 6 20:06:21.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Oct 6 20:06:21.907: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 6 20:06:22.009: INFO: Waiting for terminating namespaces to be deleted... Oct 6 20:06:22.017: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Oct 6 20:06:22.029: INFO: kube-proxy-knc9b from kube-system started at 2020-09-23 08:27:39 +0000 UTC (1 container statuses recorded) Oct 6 20:06:22.030: INFO: Container kube-proxy ready: true, restart count 0 Oct 6 20:06:22.030: INFO: kindnet-nlsvd from kube-system started at 2020-09-23 08:27:39 +0000 UTC (1 container statuses recorded) Oct 6 20:06:22.030: INFO: Container kindnet-cni ready: true, restart count 0 Oct 6 20:06:22.030: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Oct 6 20:06:22.037: INFO: kindnet-5wksn from kube-system started at 2020-09-23 08:27:38 +0000 UTC (1 container statuses recorded) Oct 6 20:06:22.037: INFO: Container kindnet-cni ready: true, restart count 0 Oct 6 20:06:22.037: INFO: kube-proxy-jgndm from kube-system started at 2020-09-23 08:27:38 +0000 UTC (1 container statuses recorded) Oct 6 20:06:22.037: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-c3e05ac2-8e16-4ed3-84cd-ea7c7f9cca94 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-c3e05ac2-8e16-4ed3-84cd-ea7c7f9cca94 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-c3e05ac2-8e16-4ed3-84cd-ea7c7f9cca94 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Oct 6 20:06:30.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9996" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.966 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":11,"skipped":135,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Deprecated] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Oct 6 20:06:30.793: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl rolling-update /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1587 [It] should support rolling-update to same image [Deprecated] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Oct 6 20:06:30.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-3317' Oct 6 20:06:34.751: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Oct 6 20:06:34.752: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: rolling-update to same image controller Oct 6 20:06:34.792: INFO: scanned /root for discovery docs: Oct 6 20:06:34.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3317' Oct 6 20:06:53.247: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Oct 6 20:06:53.247: INFO: stdout: "Created e2e-test-httpd-rc-069b4513e2f0ead7b48bd375e5aa5e4a\nScaling up e2e-test-httpd-rc-069b4513e2f0ead7b48bd375e5aa5e4a from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-069b4513e2f0ead7b48bd375e5aa5e4a up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-069b4513e2f0ead7b48bd375e5aa5e4a to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" Oct 6 20:06:53.248: INFO: stdout: "Created e2e-test-httpd-rc-069b4513e2f0ead7b48bd375e5aa5e4a\nScaling up e2e-test-httpd-rc-069b4513e2f0ead7b48bd375e5aa5e4a from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-069b4513e2f0ead7b48bd375e5aa5e4a up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-069b4513e2f0ead7b48bd375e5aa5e4a to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. 
Oct 6 20:06:53.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-3317' Oct 6 20:06:54.506: INFO: stderr: "" Oct 6 20:06:54.507: INFO: stdout: "e2e-test-httpd-rc-069b4513e2f0ead7b48bd375e5aa5e4a-5fjr7 " Oct 6 20:06:54.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-069b4513e2f0ead7b48bd375e5aa5e4a-5fjr7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3317' Oct 6 20:06:55.790: INFO: stderr: "" Oct 6 20:06:55.791: INFO: stdout: "true" Oct 6 20:06:55.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-069b4513e2f0ead7b48bd375e5aa5e4a-5fjr7 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3317' Oct 6 20:06:57.083: INFO: stderr: "" Oct 6 20:06:57.083: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Oct 6 20:06:57.084: INFO: e2e-test-httpd-rc-069b4513e2f0ead7b48bd375e5aa5e4a-5fjr7 is verified up and running [AfterEach] Kubectl rolling-update /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1593 Oct 6 20:06:57.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-3317' Oct 6 20:06:58.332: INFO: stderr: "" Oct 6 20:06:58.332: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Oct 6 20:06:58.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3317" for this suite. 
• [SLOW TEST:27.553 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582 should support rolling-update to same image [Deprecated] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Deprecated] [Conformance]","total":278,"completed":12,"skipped":141,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Oct 6 20:06:58.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc 
simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W1006 20:07:10.254339 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Oct 6 20:07:10.254: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Oct 6 20:07:10.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6895" for this suite. 
• [SLOW TEST:11.920 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":13,"skipped":189,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Oct 6 20:07:10.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-5026 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 6 20:07:10.362: INFO: Waiting up to 10m0s for all (but 0) nodes to be 
schedulable STEP: Creating test pods Oct 6 20:07:34.566: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.118:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5026 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 6 20:07:34.566: INFO: >>> kubeConfig: /root/.kube/config I1006 20:07:34.628390 7 log.go:172] (0x4002cb46e0) (0x40027fd400) Create stream I1006 20:07:34.628539 7 log.go:172] (0x4002cb46e0) (0x40027fd400) Stream added, broadcasting: 1 I1006 20:07:34.633259 7 log.go:172] (0x4002cb46e0) Reply frame received for 1 I1006 20:07:34.633489 7 log.go:172] (0x4002cb46e0) (0x400248cf00) Create stream I1006 20:07:34.633610 7 log.go:172] (0x4002cb46e0) (0x400248cf00) Stream added, broadcasting: 3 I1006 20:07:34.635553 7 log.go:172] (0x4002cb46e0) Reply frame received for 3 I1006 20:07:34.635760 7 log.go:172] (0x4002cb46e0) (0x400248cfa0) Create stream I1006 20:07:34.635855 7 log.go:172] (0x4002cb46e0) (0x400248cfa0) Stream added, broadcasting: 5 I1006 20:07:34.637860 7 log.go:172] (0x4002cb46e0) Reply frame received for 5 I1006 20:07:34.777518 7 log.go:172] (0x4002cb46e0) Data frame received for 3 I1006 20:07:34.777774 7 log.go:172] (0x400248cf00) (3) Data frame handling I1006 20:07:34.778051 7 log.go:172] (0x4002cb46e0) Data frame received for 5 I1006 20:07:34.778249 7 log.go:172] (0x400248cfa0) (5) Data frame handling I1006 20:07:34.778531 7 log.go:172] (0x400248cf00) (3) Data frame sent I1006 20:07:34.778792 7 log.go:172] (0x4002cb46e0) Data frame received for 3 I1006 20:07:34.778904 7 log.go:172] (0x400248cf00) (3) Data frame handling I1006 20:07:34.779762 7 log.go:172] (0x4002cb46e0) Data frame received for 1 I1006 20:07:34.779831 7 log.go:172] (0x40027fd400) (1) Data frame handling I1006 20:07:34.779904 7 log.go:172] (0x40027fd400) (1) Data frame sent I1006 20:07:34.779981 7 log.go:172] (0x4002cb46e0) (0x40027fd400) 
Stream removed, broadcasting: 1 I1006 20:07:34.780070 7 log.go:172] (0x4002cb46e0) Go away received I1006 20:07:34.780568 7 log.go:172] (0x4002cb46e0) (0x40027fd400) Stream removed, broadcasting: 1 I1006 20:07:34.780740 7 log.go:172] (0x4002cb46e0) (0x400248cf00) Stream removed, broadcasting: 3 I1006 20:07:34.780968 7 log.go:172] (0x4002cb46e0) (0x400248cfa0) Stream removed, broadcasting: 5 Oct 6 20:07:34.782: INFO: Found all expected endpoints: [netserver-0] Oct 6 20:07:34.787: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.227:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5026 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 6 20:07:34.787: INFO: >>> kubeConfig: /root/.kube/config I1006 20:07:34.850414 7 log.go:172] (0x400291b810) (0x4002518960) Create stream I1006 20:07:34.850626 7 log.go:172] (0x400291b810) (0x4002518960) Stream added, broadcasting: 1 I1006 20:07:34.859664 7 log.go:172] (0x400291b810) Reply frame received for 1 I1006 20:07:34.860254 7 log.go:172] (0x400291b810) (0x4002518a00) Create stream I1006 20:07:34.860520 7 log.go:172] (0x400291b810) (0x4002518a00) Stream added, broadcasting: 3 I1006 20:07:34.862816 7 log.go:172] (0x400291b810) Reply frame received for 3 I1006 20:07:34.863014 7 log.go:172] (0x400291b810) (0x40027fd4a0) Create stream I1006 20:07:34.863096 7 log.go:172] (0x400291b810) (0x40027fd4a0) Stream added, broadcasting: 5 I1006 20:07:34.864481 7 log.go:172] (0x400291b810) Reply frame received for 5 I1006 20:07:34.920746 7 log.go:172] (0x400291b810) Data frame received for 5 I1006 20:07:34.921018 7 log.go:172] (0x40027fd4a0) (5) Data frame handling I1006 20:07:34.921198 7 log.go:172] (0x400291b810) Data frame received for 3 I1006 20:07:34.921360 7 log.go:172] (0x4002518a00) (3) Data frame handling I1006 20:07:34.921529 7 log.go:172] (0x4002518a00) (3) Data frame sent I1006 
20:07:34.921629 7 log.go:172] (0x400291b810) Data frame received for 3 I1006 20:07:34.921711 7 log.go:172] (0x4002518a00) (3) Data frame handling I1006 20:07:34.922557 7 log.go:172] (0x400291b810) Data frame received for 1 I1006 20:07:34.922713 7 log.go:172] (0x4002518960) (1) Data frame handling I1006 20:07:34.922892 7 log.go:172] (0x4002518960) (1) Data frame sent I1006 20:07:34.923068 7 log.go:172] (0x400291b810) (0x4002518960) Stream removed, broadcasting: 1 I1006 20:07:34.923260 7 log.go:172] (0x400291b810) Go away received I1006 20:07:34.923680 7 log.go:172] (0x400291b810) (0x4002518960) Stream removed, broadcasting: 1 I1006 20:07:34.923858 7 log.go:172] (0x400291b810) (0x4002518a00) Stream removed, broadcasting: 3 I1006 20:07:34.924045 7 log.go:172] (0x400291b810) (0x40027fd4a0) Stream removed, broadcasting: 5 Oct 6 20:07:34.924: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Oct 6 20:07:34.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5026" for this suite. 
• [SLOW TEST:24.669 seconds] [sig-network] Networking /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":201,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Oct 6 20:07:34.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Oct 6 20:07:39.105: INFO: Waiting up to 5m0s 
for pod "client-envvars-fd3587d9-1d10-4037-865d-942212948a22" in namespace "pods-6147" to be "success or failure" Oct 6 20:07:39.167: INFO: Pod "client-envvars-fd3587d9-1d10-4037-865d-942212948a22": Phase="Pending", Reason="", readiness=false. Elapsed: 61.492702ms Oct 6 20:07:41.193: INFO: Pod "client-envvars-fd3587d9-1d10-4037-865d-942212948a22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087633646s Oct 6 20:07:43.200: INFO: Pod "client-envvars-fd3587d9-1d10-4037-865d-942212948a22": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094482228s Oct 6 20:07:45.206: INFO: Pod "client-envvars-fd3587d9-1d10-4037-865d-942212948a22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.100898667s STEP: Saw pod success Oct 6 20:07:45.206: INFO: Pod "client-envvars-fd3587d9-1d10-4037-865d-942212948a22" satisfied condition "success or failure" Oct 6 20:07:45.211: INFO: Trying to get logs from node jerma-worker pod client-envvars-fd3587d9-1d10-4037-865d-942212948a22 container env3cont: STEP: delete the pod Oct 6 20:07:45.252: INFO: Waiting for pod client-envvars-fd3587d9-1d10-4037-865d-942212948a22 to disappear Oct 6 20:07:45.258: INFO: Pod client-envvars-fd3587d9-1d10-4037-865d-942212948a22 no longer exists [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Oct 6 20:07:45.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6147" for this suite. 
• [SLOW TEST:10.328 seconds] [k8s.io] Pods /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":209,"failed":0} SSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Oct 6 20:07:45.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Oct 6 20:07:49.932: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d81eb8cb-bef9-4198-a4ff-4ab269b975ac" Oct 6 20:07:49.933: INFO: Waiting up 
to 5m0s for pod "pod-update-activedeadlineseconds-d81eb8cb-bef9-4198-a4ff-4ab269b975ac" in namespace "pods-1735" to be "terminated due to deadline exceeded" Oct 6 20:07:49.937: INFO: Pod "pod-update-activedeadlineseconds-d81eb8cb-bef9-4198-a4ff-4ab269b975ac": Phase="Running", Reason="", readiness=true. Elapsed: 3.738376ms Oct 6 20:07:51.944: INFO: Pod "pod-update-activedeadlineseconds-d81eb8cb-bef9-4198-a4ff-4ab269b975ac": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.011009553s Oct 6 20:07:51.944: INFO: Pod "pod-update-activedeadlineseconds-d81eb8cb-bef9-4198-a4ff-4ab269b975ac" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Oct 6 20:07:51.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1735" for this suite. • [SLOW TEST:6.686 seconds] [k8s.io] Pods /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":212,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Oct 6 20:07:51.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 6 20:07:53.666: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 6 20:07:55.688: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737611673, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737611673, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737611673, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737611673, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 6 
20:07:58.793: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Oct 6 20:07:58.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4540-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Oct 6 20:07:59.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2664" for this suite. STEP: Destroying namespace "webhook-2664-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.775 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":17,"skipped":221,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Oct 6 20:07:59.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 
STEP: Creating projection with secret that has name projected-secret-test-57c8a1ad-842f-4e70-b060-339517dd8805 STEP: Creating a pod to test consume secrets Oct 6 20:07:59.810: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7f4ed0d4-12d6-4d58-ac27-03c3fee6bd7a" in namespace "projected-5584" to be "success or failure" Oct 6 20:07:59.838: INFO: Pod "pod-projected-secrets-7f4ed0d4-12d6-4d58-ac27-03c3fee6bd7a": Phase="Pending", Reason="", readiness=false. Elapsed: 28.043576ms Oct 6 20:08:01.845: INFO: Pod "pod-projected-secrets-7f4ed0d4-12d6-4d58-ac27-03c3fee6bd7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034938954s Oct 6 20:08:04.048: INFO: Pod "pod-projected-secrets-7f4ed0d4-12d6-4d58-ac27-03c3fee6bd7a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.238557376s Oct 6 20:08:06.056: INFO: Pod "pod-projected-secrets-7f4ed0d4-12d6-4d58-ac27-03c3fee6bd7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.245819694s STEP: Saw pod success Oct 6 20:08:06.056: INFO: Pod "pod-projected-secrets-7f4ed0d4-12d6-4d58-ac27-03c3fee6bd7a" satisfied condition "success or failure" Oct 6 20:08:06.083: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-7f4ed0d4-12d6-4d58-ac27-03c3fee6bd7a container projected-secret-volume-test: STEP: delete the pod Oct 6 20:08:06.151: INFO: Waiting for pod pod-projected-secrets-7f4ed0d4-12d6-4d58-ac27-03c3fee6bd7a to disappear Oct 6 20:08:06.159: INFO: Pod pod-projected-secrets-7f4ed0d4-12d6-4d58-ac27-03c3fee6bd7a no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Oct 6 20:08:06.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5584" for this suite. 
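The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` / `Elapsed: ...` lines above come from the framework polling the pod's phase until it reaches a terminal state or the timeout expires. A minimal sketch of that loop (illustrative Python with hypothetical names; the real framework helper is Go, in `test/e2e/framework`):

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until a terminal phase or the timeout.

    Mirrors the log's wait loop: each iteration reports the phase and the
    elapsed time, like the INFO lines above (Pending, Pending, Succeeded).
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod phase={phase!r}. Elapsed: {elapsed:.6f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed > timeout:
            raise TimeoutError(f"pod not terminal after {timeout}s")
        time.sleep(interval)

# Simulated status sequence like the one in the log: Pending twice, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_condition(lambda: next(phases), interval=0.01)
print(result)
```

The test treats `Succeeded` as "success or failure" satisfied; `Failed` also satisfies the condition, which is why the condition is named that way in the log.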
• [SLOW TEST:6.439 seconds] [sig-storage] Projected secret /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":231,"failed":0} SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Oct 6 20:08:06.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet 
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Oct 6 20:08:10.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6050" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":238,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Oct 6 20:08:10.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-d4868b89-5939-4e24-a684-da1ebca6d80f STEP: Creating a pod to test consume configMaps Oct 6 20:08:10.390: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6e03c9fb-1707-45d9-afb2-f6a160ff1fda" in namespace "projected-6316" to be "success or failure" Oct 6 20:08:10.395: INFO: Pod "pod-projected-configmaps-6e03c9fb-1707-45d9-afb2-f6a160ff1fda": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.566797ms Oct 6 20:08:12.582: INFO: Pod "pod-projected-configmaps-6e03c9fb-1707-45d9-afb2-f6a160ff1fda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191992585s Oct 6 20:08:14.589: INFO: Pod "pod-projected-configmaps-6e03c9fb-1707-45d9-afb2-f6a160ff1fda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.199284954s STEP: Saw pod success Oct 6 20:08:14.590: INFO: Pod "pod-projected-configmaps-6e03c9fb-1707-45d9-afb2-f6a160ff1fda" satisfied condition "success or failure" Oct 6 20:08:14.595: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-6e03c9fb-1707-45d9-afb2-f6a160ff1fda container projected-configmap-volume-test: STEP: delete the pod Oct 6 20:08:14.639: INFO: Waiting for pod pod-projected-configmaps-6e03c9fb-1707-45d9-afb2-f6a160ff1fda to disappear Oct 6 20:08:14.658: INFO: Pod pod-projected-configmaps-6e03c9fb-1707-45d9-afb2-f6a160ff1fda no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Oct 6 20:08:14.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6316" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":247,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Oct 6 20:08:14.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Oct 6 20:08:21.378: INFO: Successfully updated pod "adopt-release-2dkrw" STEP: Checking that the Job readopts the Pod Oct 6 20:08:21.378: INFO: Waiting up to 15m0s for pod "adopt-release-2dkrw" in namespace "job-2167" to be "adopted" Oct 6 20:08:21.394: INFO: Pod "adopt-release-2dkrw": Phase="Running", Reason="", readiness=true. Elapsed: 15.455156ms Oct 6 20:08:23.401: INFO: Pod "adopt-release-2dkrw": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.022840312s Oct 6 20:08:23.402: INFO: Pod "adopt-release-2dkrw" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Oct 6 20:08:23.918: INFO: Successfully updated pod "adopt-release-2dkrw" STEP: Checking that the Job releases the Pod Oct 6 20:08:23.919: INFO: Waiting up to 15m0s for pod "adopt-release-2dkrw" in namespace "job-2167" to be "released" Oct 6 20:08:23.945: INFO: Pod "adopt-release-2dkrw": Phase="Running", Reason="", readiness=true. Elapsed: 26.284835ms Oct 6 20:08:23.945: INFO: Pod "adopt-release-2dkrw" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Oct 6 20:08:23.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2167" for this suite. • [SLOW TEST:9.339 seconds] [sig-apps] Job /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":21,"skipped":258,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: 
Creating a kubernetes client Oct 6 20:08:24.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Oct 6 20:08:24.180: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"3a329513-1fea-4241-8a62-864dbe42daf5", Controller:(*bool)(0x4002e80e0a), BlockOwnerDeletion:(*bool)(0x4002e80e0b)}} Oct 6 20:08:24.221: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"c28a444b-aa27-4591-83ea-2f659dfbc01d", Controller:(*bool)(0x4002a62cea), BlockOwnerDeletion:(*bool)(0x4002a62ceb)}} Oct 6 20:08:24.337: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"5af6898c-e6c7-404a-bb7a-3fa973cd6141", Controller:(*bool)(0x4002e81172), BlockOwnerDeletion:(*bool)(0x4002e81173)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Oct 6 20:08:29.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8759" for this suite. 
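The three `ObjectMeta.OwnerReferences` lines above set up a deliberate cycle: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2. The test asserts the garbage collector is not blocked by such a circle. A sketch of detecting that cycle by walking owner references (an illustration only, not kube-controller-manager's actual graph algorithm):

```python
# Owner graph taken from the log: object name -> its owner's name.
owners = {"pod1": "pod3", "pod2": "pod1", "pod3": "pod2"}

def in_owner_cycle(name, owners):
    """Follow ownerReferences from `name`; return True if a node repeats."""
    seen = set()
    cur = name
    while cur in owners:
        if cur in seen:
            return True
        seen.add(cur)
        cur = owners[cur]
    return False

# All three pods sit on the same cycle.
print(all(in_owner_cycle(p, owners) for p in owners))
```

Because every member of the circle is unreachable from any live root, a collector that only followed "delete owner, then dependents" could stall here; the real GC handles this case, which is what the test verifies.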
• [SLOW TEST:5.370 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":22,"skipped":266,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Oct 6 20:08:29.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-4b59fc01-77ab-4beb-af15-b12f18a3622c STEP: Creating a pod to test consume secrets Oct 6 20:08:29.463: INFO: Waiting up to 5m0s for pod "pod-secrets-5d6eb758-110a-45e5-ac36-e1914b059687" in namespace "secrets-493" to be "success or failure" Oct 6 20:08:29.469: INFO: Pod "pod-secrets-5d6eb758-110a-45e5-ac36-e1914b059687": Phase="Pending", Reason="", 
readiness=false. Elapsed: 5.484061ms Oct 6 20:08:31.482: INFO: Pod "pod-secrets-5d6eb758-110a-45e5-ac36-e1914b059687": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017869011s Oct 6 20:08:33.488: INFO: Pod "pod-secrets-5d6eb758-110a-45e5-ac36-e1914b059687": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024021371s STEP: Saw pod success Oct 6 20:08:33.488: INFO: Pod "pod-secrets-5d6eb758-110a-45e5-ac36-e1914b059687" satisfied condition "success or failure" Oct 6 20:08:33.492: INFO: Trying to get logs from node jerma-worker pod pod-secrets-5d6eb758-110a-45e5-ac36-e1914b059687 container secret-volume-test: STEP: delete the pod Oct 6 20:08:33.513: INFO: Waiting for pod pod-secrets-5d6eb758-110a-45e5-ac36-e1914b059687 to disappear Oct 6 20:08:33.517: INFO: Pod pod-secrets-5d6eb758-110a-45e5-ac36-e1914b059687 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Oct 6 20:08:33.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-493" for this suite. 
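"Consumable from pods in volume with mappings" refers to the secret volume's `items` field, which remaps secret keys to chosen file paths instead of the default key-named files. A rough sketch of the mapping semantics (the key and path names below are hypothetical; the log does not show the test secret's contents, and the kubelet of course materializes real files with modes):

```python
def project_secret(data, items=None):
    """Map secret keys to file paths the way a secret volume does.

    With no items, every key becomes a file named after the key; with items,
    only the listed keys appear, at their mapped paths.
    """
    if items is None:
        return dict(data)
    return {it["path"]: data[it["key"]] for it in items}

data = {"data-1": "value-1"}
files = project_secret(data, items=[{"key": "data-1", "path": "new-path-data-1"}])
print(files)
```

The container in the test then reads the mapped path and the framework checks the logged contents, which is what the "Trying to get logs from node ..." line above is doing.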
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":290,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Oct 6 20:08:33.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-9f46978c-ac93-4beb-8c96-23238375baed STEP: Creating a pod to test consume configMaps Oct 6 20:08:33.605: INFO: Waiting up to 5m0s for pod "pod-configmaps-6c02f6e0-4e3d-4781-b6b7-1c058c624645" in namespace "configmap-7530" to be "success or failure" Oct 6 20:08:33.623: INFO: Pod "pod-configmaps-6c02f6e0-4e3d-4781-b6b7-1c058c624645": Phase="Pending", Reason="", readiness=false. Elapsed: 17.343206ms Oct 6 20:08:35.635: INFO: Pod "pod-configmaps-6c02f6e0-4e3d-4781-b6b7-1c058c624645": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029641947s Oct 6 20:08:37.643: INFO: Pod "pod-configmaps-6c02f6e0-4e3d-4781-b6b7-1c058c624645": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.037224688s STEP: Saw pod success Oct 6 20:08:37.643: INFO: Pod "pod-configmaps-6c02f6e0-4e3d-4781-b6b7-1c058c624645" satisfied condition "success or failure" Oct 6 20:08:37.648: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-6c02f6e0-4e3d-4781-b6b7-1c058c624645 container configmap-volume-test: STEP: delete the pod Oct 6 20:08:37.672: INFO: Waiting for pod pod-configmaps-6c02f6e0-4e3d-4781-b6b7-1c058c624645 to disappear Oct 6 20:08:37.676: INFO: Pod pod-configmaps-6c02f6e0-4e3d-4781-b6b7-1c058c624645 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Oct 6 20:08:37.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7530" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":292,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Oct 6 20:08:37.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet 
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-3317 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Oct 6 20:08:37.863: INFO: Found 0 stateful pods, waiting for 3 Oct 6 20:08:47.895: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 6 20:08:47.895: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 6 20:08:47.895: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Oct 6 20:08:57.873: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 6 20:08:57.873: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 6 20:08:57.873: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Oct 6 20:08:57.911: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Oct 6 20:09:07.982: INFO: Updating stateful set ss2 Oct 6 20:09:08.141: INFO: Waiting for Pod statefulset-3317/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision 
when they are deleted Oct 6 20:09:18.346: INFO: Found 2 stateful pods, waiting for 3 Oct 6 20:09:28.354: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 6 20:09:28.354: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 6 20:09:28.354: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Oct 6 20:09:28.387: INFO: Updating stateful set ss2 Oct 6 20:09:28.427: INFO: Waiting for Pod statefulset-3317/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Oct 6 20:09:38.464: INFO: Updating stateful set ss2 Oct 6 20:09:38.531: INFO: Waiting for StatefulSet statefulset-3317/ss2 to complete update Oct 6 20:09:38.531: INFO: Waiting for Pod statefulset-3317/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Oct 6 20:09:48.545: INFO: Deleting all statefulset in ns statefulset-3317 Oct 6 20:09:48.551: INFO: Scaling statefulset ss2 to 0 Oct 6 20:10:18.576: INFO: Waiting for statefulset status.replicas updated to 0 Oct 6 20:10:18.581: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Oct 6 20:10:18.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3317" for this suite. 
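The canary and phased updates above are driven by the RollingUpdate strategy's `partition`: pods with ordinal >= partition move to the new controller revision, pods below it stay on the old one, and lowering the partition rolls more pods. The log's two revisions are `ss2-65c7964b94` (old) and `ss2-84f9d6bf57` (new), and the update order ss2-2, then ss2-1, then ss2-0 matches this rule. A sketch of the semantics:

```python
def revisions_after_partition(replicas, partition, old, new):
    """Partitioned rolling update: ordinals >= partition get the new
    revision; ordinals below the partition keep the old one."""
    return {f"ss2-{i}": (new if i >= partition else old) for i in range(replicas)}

OLD, NEW = "ss2-65c7964b94", "ss2-84f9d6bf57"  # revisions from the log

# Canary: partition=2 updates only the highest ordinal, ss2-2.
print(revisions_after_partition(3, 2, OLD, NEW))
# Phased: dropping the partition to 0 eventually rolls ss2-1 and ss2-0 too.
print(revisions_after_partition(3, 0, OLD, NEW))
```

Setting the partition above the replica count (the "partition is greater than the number of replicas" step in the log) updates nothing, which is exactly what that step asserts.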
• [SLOW TEST:100.940 seconds] [sig-apps] StatefulSet /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":25,"skipped":298,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Oct 6 20:10:18.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Oct 6 20:10:18.715: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f4bd32d-4143-44f8-899d-69cac70b168c" in namespace "downward-api-9496" to be "success or failure" Oct 6 20:10:18.739: INFO: Pod "downwardapi-volume-8f4bd32d-4143-44f8-899d-69cac70b168c": Phase="Pending", Reason="", readiness=false. Elapsed: 23.778174ms Oct 6 20:10:20.745: INFO: Pod "downwardapi-volume-8f4bd32d-4143-44f8-899d-69cac70b168c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030300943s Oct 6 20:10:22.752: INFO: Pod "downwardapi-volume-8f4bd32d-4143-44f8-899d-69cac70b168c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03727485s STEP: Saw pod success Oct 6 20:10:22.753: INFO: Pod "downwardapi-volume-8f4bd32d-4143-44f8-899d-69cac70b168c" satisfied condition "success or failure" Oct 6 20:10:22.757: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-8f4bd32d-4143-44f8-899d-69cac70b168c container client-container: STEP: delete the pod Oct 6 20:10:22.824: INFO: Waiting for pod downwardapi-volume-8f4bd32d-4143-44f8-899d-69cac70b168c to disappear Oct 6 20:10:22.830: INFO: Pod downwardapi-volume-8f4bd32d-4143-44f8-899d-69cac70b168c no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Oct 6 20:10:22.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9496" for this suite. 
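The `DefaultMode` the downward API test checks is given in the pod spec as a decimal integer (JSON has no octal literals); the API default for volume files is 420, i.e. 0644. A small sketch rendering such a mode the way the test's container would observe it (hand-rolled helper, purely illustrative):

```python
def mode_string(mode):
    """Render the nine permission bits of a numeric mode, rwx-style."""
    bits = "rwxrwxrwx"
    return "".join(b if mode & (1 << (8 - i)) else "-" for i, b in enumerate(bits))

# 420 decimal == 0644 octal: owner read/write, group and other read-only.
print(oct(420), mode_string(420))
```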
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":312,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Oct 6 20:10:22.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-2d253888-c4e0-41f1-af38-441157eac407 Oct 6 20:10:22.970: INFO: Pod name my-hostname-basic-2d253888-c4e0-41f1-af38-441157eac407: Found 0 pods out of 1 Oct 6 20:10:27.977: INFO: Pod name my-hostname-basic-2d253888-c4e0-41f1-af38-441157eac407: Found 1 pods out of 1 Oct 6 20:10:27.977: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-2d253888-c4e0-41f1-af38-441157eac407" are running Oct 6 20:10:27.982: INFO: Pod "my-hostname-basic-2d253888-c4e0-41f1-af38-441157eac407-zstmg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-06 20:10:23 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-06 20:10:26 +0000 UTC Reason: 
Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-06 20:10:26 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-06 20:10:22 +0000 UTC Reason: Message:}]) Oct 6 20:10:27.982: INFO: Trying to dial the pod Oct 6 20:10:33.010: INFO: Controller my-hostname-basic-2d253888-c4e0-41f1-af38-441157eac407: Got expected result from replica 1 [my-hostname-basic-2d253888-c4e0-41f1-af38-441157eac407-zstmg]: "my-hostname-basic-2d253888-c4e0-41f1-af38-441157eac407-zstmg", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Oct 6 20:10:33.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9410" for this suite. • [SLOW TEST:10.178 seconds] [sig-apps] ReplicationController /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":27,"skipped":322,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Oct 6 20:10:33.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Oct 6 20:10:46.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8699" for this suite. 
• [SLOW TEST:13.226 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":28,"skipped":349,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Oct 6 20:10:46.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-742132a6-c4b7-4777-9a1a-4d32a081b9c0 STEP: Creating configMap with name cm-test-opt-upd-6c6a0ca7-fd96-4f3e-8cad-733289bde2f8 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-742132a6-c4b7-4777-9a1a-4d32a081b9c0 STEP: Updating configmap cm-test-opt-upd-6c6a0ca7-fd96-4f3e-8cad-733289bde2f8 STEP: Creating configMap with name 
cm-test-opt-create-28fd0ebb-ef5f-4ece-9510-555487b19900 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Oct 6 20:12:23.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8189" for this suite. • [SLOW TEST:97.157 seconds] [sig-storage] ConfigMap /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":361,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Oct 6 20:12:23.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should 
provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Oct 6 20:12:23.488: INFO: Waiting up to 5m0s for pod "downwardapi-volume-130e5dfd-5ae2-45bc-a107-454c68d438ec" in namespace "downward-api-8224" to be "success or failure" Oct 6 20:12:23.500: INFO: Pod "downwardapi-volume-130e5dfd-5ae2-45bc-a107-454c68d438ec": Phase="Pending", Reason="", readiness=false. Elapsed: 11.87529ms Oct 6 20:12:25.507: INFO: Pod "downwardapi-volume-130e5dfd-5ae2-45bc-a107-454c68d438ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018901738s Oct 6 20:12:27.514: INFO: Pod "downwardapi-volume-130e5dfd-5ae2-45bc-a107-454c68d438ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026016435s STEP: Saw pod success Oct 6 20:12:27.514: INFO: Pod "downwardapi-volume-130e5dfd-5ae2-45bc-a107-454c68d438ec" satisfied condition "success or failure" Oct 6 20:12:27.519: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-130e5dfd-5ae2-45bc-a107-454c68d438ec container client-container: STEP: delete the pod Oct 6 20:12:27.566: INFO: Waiting for pod downwardapi-volume-130e5dfd-5ae2-45bc-a107-454c68d438ec to disappear Oct 6 20:12:27.614: INFO: Pod downwardapi-volume-130e5dfd-5ae2-45bc-a107-454c68d438ec no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Oct 6 20:12:27.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8224" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":373,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Oct 6 20:12:27.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Oct 6 20:12:27.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5211" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":31,"skipped":392,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Oct 6 20:12:27.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7600 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-7600 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7600 Oct 6 20:12:27.903: INFO: Found 0 stateful pods, waiting for 1 Oct 6 20:12:37.911: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful 
set scale up will halt with unhealthy stateful pod Oct 6 20:12:37.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 6 20:12:39.435: INFO: stderr: "I1006 20:12:39.313738 186 log.go:172] (0x40000160b0) (0x4000819cc0) Create stream\nI1006 20:12:39.316393 186 log.go:172] (0x40000160b0) (0x4000819cc0) Stream added, broadcasting: 1\nI1006 20:12:39.328896 186 log.go:172] (0x40000160b0) Reply frame received for 1\nI1006 20:12:39.329902 186 log.go:172] (0x40000160b0) (0x4000819d60) Create stream\nI1006 20:12:39.329992 186 log.go:172] (0x40000160b0) (0x4000819d60) Stream added, broadcasting: 3\nI1006 20:12:39.331473 186 log.go:172] (0x40000160b0) Reply frame received for 3\nI1006 20:12:39.331698 186 log.go:172] (0x40000160b0) (0x400077e000) Create stream\nI1006 20:12:39.331759 186 log.go:172] (0x40000160b0) (0x400077e000) Stream added, broadcasting: 5\nI1006 20:12:39.333088 186 log.go:172] (0x40000160b0) Reply frame received for 5\nI1006 20:12:39.392813 186 log.go:172] (0x40000160b0) Data frame received for 5\nI1006 20:12:39.393090 186 log.go:172] (0x400077e000) (5) Data frame handling\nI1006 20:12:39.393490 186 log.go:172] (0x400077e000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1006 20:12:39.418099 186 log.go:172] (0x40000160b0) Data frame received for 3\nI1006 20:12:39.418248 186 log.go:172] (0x40000160b0) Data frame received for 5\nI1006 20:12:39.418410 186 log.go:172] (0x400077e000) (5) Data frame handling\nI1006 20:12:39.418635 186 log.go:172] (0x4000819d60) (3) Data frame handling\nI1006 20:12:39.418802 186 log.go:172] (0x4000819d60) (3) Data frame sent\nI1006 20:12:39.418968 186 log.go:172] (0x40000160b0) Data frame received for 3\nI1006 20:12:39.419081 186 log.go:172] (0x4000819d60) (3) Data frame handling\nI1006 20:12:39.420212 186 log.go:172] (0x40000160b0) Data frame received for 1\nI1006 
20:12:39.420351 186 log.go:172] (0x4000819cc0) (1) Data frame handling\nI1006 20:12:39.420479 186 log.go:172] (0x4000819cc0) (1) Data frame sent\nI1006 20:12:39.422037 186 log.go:172] (0x40000160b0) (0x4000819cc0) Stream removed, broadcasting: 1\nI1006 20:12:39.424435 186 log.go:172] (0x40000160b0) Go away received\nI1006 20:12:39.428815 186 log.go:172] (0x40000160b0) (0x4000819cc0) Stream removed, broadcasting: 1\nI1006 20:12:39.429161 186 log.go:172] (0x40000160b0) (0x4000819d60) Stream removed, broadcasting: 3\nI1006 20:12:39.429349 186 log.go:172] (0x40000160b0) (0x400077e000) Stream removed, broadcasting: 5\n" Oct 6 20:12:39.436: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 6 20:12:39.437: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 6 20:12:39.444: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Oct 6 20:12:49.450: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 6 20:12:49.451: INFO: Waiting for statefulset status.replicas updated to 0 Oct 6 20:12:49.482: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999939578s Oct 6 20:12:50.489: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.982194958s Oct 6 20:12:51.496: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.975366242s Oct 6 20:12:52.504: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.968599893s Oct 6 20:12:53.512: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.960486814s Oct 6 20:12:54.520: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.952231665s Oct 6 20:12:55.528: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.944192513s Oct 6 20:12:56.536: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.936683353s Oct 6 20:12:57.545: INFO: Verifying 
statefulset ss doesn't scale past 1 for another 1.928121435s Oct 6 20:12:58.552: INFO: Verifying statefulset ss doesn't scale past 1 for another 919.607559ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7600 Oct 6 20:12:59.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 6 20:13:01.057: INFO: stderr: "I1006 20:13:00.937837 207 log.go:172] (0x4000aa8000) (0x4000a40000) Create stream\nI1006 20:13:00.940815 207 log.go:172] (0x4000aa8000) (0x4000a40000) Stream added, broadcasting: 1\nI1006 20:13:00.954308 207 log.go:172] (0x4000aa8000) Reply frame received for 1\nI1006 20:13:00.954878 207 log.go:172] (0x4000aa8000) (0x4000847ae0) Create stream\nI1006 20:13:00.954938 207 log.go:172] (0x4000aa8000) (0x4000847ae0) Stream added, broadcasting: 3\nI1006 20:13:00.956494 207 log.go:172] (0x4000aa8000) Reply frame received for 3\nI1006 20:13:00.957039 207 log.go:172] (0x4000aa8000) (0x4000847cc0) Create stream\nI1006 20:13:00.957138 207 log.go:172] (0x4000aa8000) (0x4000847cc0) Stream added, broadcasting: 5\nI1006 20:13:00.958953 207 log.go:172] (0x4000aa8000) Reply frame received for 5\nI1006 20:13:01.038769 207 log.go:172] (0x4000aa8000) Data frame received for 3\nI1006 20:13:01.039260 207 log.go:172] (0x4000aa8000) Data frame received for 1\nI1006 20:13:01.039643 207 log.go:172] (0x4000aa8000) Data frame received for 5\nI1006 20:13:01.039804 207 log.go:172] (0x4000a40000) (1) Data frame handling\nI1006 20:13:01.040030 207 log.go:172] (0x4000847cc0) (5) Data frame handling\nI1006 20:13:01.040302 207 log.go:172] (0x4000847ae0) (3) Data frame handling\nI1006 20:13:01.041439 207 log.go:172] (0x4000847ae0) (3) Data frame sent\nI1006 20:13:01.042065 207 log.go:172] (0x4000a40000) (1) Data frame sent\nI1006 20:13:01.042506 207 log.go:172] (0x4000aa8000) Data frame 
received for 3\nI1006 20:13:01.042601 207 log.go:172] (0x4000847ae0) (3) Data frame handling\nI1006 20:13:01.042702 207 log.go:172] (0x4000847cc0) (5) Data frame sent\nI1006 20:13:01.042844 207 log.go:172] (0x4000aa8000) Data frame received for 5\nI1006 20:13:01.042919 207 log.go:172] (0x4000847cc0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1006 20:13:01.045220 207 log.go:172] (0x4000aa8000) (0x4000a40000) Stream removed, broadcasting: 1\nI1006 20:13:01.045892 207 log.go:172] (0x4000aa8000) Go away received\nI1006 20:13:01.049985 207 log.go:172] (0x4000aa8000) (0x4000a40000) Stream removed, broadcasting: 1\nI1006 20:13:01.050299 207 log.go:172] (0x4000aa8000) (0x4000847ae0) Stream removed, broadcasting: 3\nI1006 20:13:01.050522 207 log.go:172] (0x4000aa8000) (0x4000847cc0) Stream removed, broadcasting: 5\n" Oct 6 20:13:01.059: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 6 20:13:01.059: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 6 20:13:01.065: INFO: Found 1 stateful pods, waiting for 3 Oct 6 20:13:11.073: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Oct 6 20:13:11.074: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Oct 6 20:13:11.074: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Oct 6 20:13:11.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 6 20:13:12.569: INFO: stderr: "I1006 20:13:12.456392 231 log.go:172] (0x4000122e70) (0x4000a860a0) Create stream\nI1006 20:13:12.459945 231 log.go:172] (0x4000122e70) 
(0x4000a860a0) Stream added, broadcasting: 1\nI1006 20:13:12.474553 231 log.go:172] (0x4000122e70) Reply frame received for 1\nI1006 20:13:12.475118 231 log.go:172] (0x4000122e70) (0x4000a86140) Create stream\nI1006 20:13:12.475174 231 log.go:172] (0x4000122e70) (0x4000a86140) Stream added, broadcasting: 3\nI1006 20:13:12.476791 231 log.go:172] (0x4000122e70) Reply frame received for 3\nI1006 20:13:12.477359 231 log.go:172] (0x4000122e70) (0x400081b900) Create stream\nI1006 20:13:12.477462 231 log.go:172] (0x4000122e70) (0x400081b900) Stream added, broadcasting: 5\nI1006 20:13:12.479005 231 log.go:172] (0x4000122e70) Reply frame received for 5\nI1006 20:13:12.547476 231 log.go:172] (0x4000122e70) Data frame received for 1\nI1006 20:13:12.547787 231 log.go:172] (0x4000122e70) Data frame received for 3\nI1006 20:13:12.548088 231 log.go:172] (0x4000122e70) Data frame received for 5\nI1006 20:13:12.548268 231 log.go:172] (0x400081b900) (5) Data frame handling\nI1006 20:13:12.548489 231 log.go:172] (0x4000a86140) (3) Data frame handling\nI1006 20:13:12.548777 231 log.go:172] (0x4000a860a0) (1) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1006 20:13:12.551061 231 log.go:172] (0x400081b900) (5) Data frame sent\nI1006 20:13:12.551208 231 log.go:172] (0x4000a86140) (3) Data frame sent\nI1006 20:13:12.551433 231 log.go:172] (0x4000122e70) Data frame received for 3\nI1006 20:13:12.551620 231 log.go:172] (0x4000a86140) (3) Data frame handling\nI1006 20:13:12.551907 231 log.go:172] (0x4000a860a0) (1) Data frame sent\nI1006 20:13:12.552121 231 log.go:172] (0x4000122e70) Data frame received for 5\nI1006 20:13:12.552267 231 log.go:172] (0x400081b900) (5) Data frame handling\nI1006 20:13:12.553860 231 log.go:172] (0x4000122e70) (0x4000a860a0) Stream removed, broadcasting: 1\nI1006 20:13:12.556935 231 log.go:172] (0x4000122e70) Go away received\nI1006 20:13:12.560273 231 log.go:172] (0x4000122e70) (0x4000a860a0) Stream removed, broadcasting: 1\nI1006 
20:13:12.560706 231 log.go:172] (0x4000122e70) (0x4000a86140) Stream removed, broadcasting: 3\nI1006 20:13:12.561082 231 log.go:172] (0x4000122e70) (0x400081b900) Stream removed, broadcasting: 5\n" Oct 6 20:13:12.570: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 6 20:13:12.570: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 6 20:13:12.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 6 20:13:14.043: INFO: stderr: "I1006 20:13:13.896084 253 log.go:172] (0x40007f8b00) (0x4000a0e000) Create stream\nI1006 20:13:13.898447 253 log.go:172] (0x40007f8b00) (0x4000a0e000) Stream added, broadcasting: 1\nI1006 20:13:13.906728 253 log.go:172] (0x40007f8b00) Reply frame received for 1\nI1006 20:13:13.907247 253 log.go:172] (0x40007f8b00) (0x4000a1e000) Create stream\nI1006 20:13:13.907301 253 log.go:172] (0x40007f8b00) (0x4000a1e000) Stream added, broadcasting: 3\nI1006 20:13:13.908535 253 log.go:172] (0x40007f8b00) Reply frame received for 3\nI1006 20:13:13.909074 253 log.go:172] (0x40007f8b00) (0x4000a0e0a0) Create stream\nI1006 20:13:13.909181 253 log.go:172] (0x40007f8b00) (0x4000a0e0a0) Stream added, broadcasting: 5\nI1006 20:13:13.910928 253 log.go:172] (0x40007f8b00) Reply frame received for 5\nI1006 20:13:13.993873 253 log.go:172] (0x40007f8b00) Data frame received for 5\nI1006 20:13:13.994259 253 log.go:172] (0x4000a0e0a0) (5) Data frame handling\nI1006 20:13:13.995121 253 log.go:172] (0x4000a0e0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1006 20:13:14.021657 253 log.go:172] (0x40007f8b00) Data frame received for 3\nI1006 20:13:14.021813 253 log.go:172] (0x4000a1e000) (3) Data frame handling\nI1006 20:13:14.021923 253 log.go:172] (0x40007f8b00) Data frame 
received for 5\nI1006 20:13:14.022015 253 log.go:172] (0x4000a0e0a0) (5) Data frame handling\nI1006 20:13:14.022210 253 log.go:172] (0x4000a1e000) (3) Data frame sent\nI1006 20:13:14.022436 253 log.go:172] (0x40007f8b00) Data frame received for 3\nI1006 20:13:14.022607 253 log.go:172] (0x4000a1e000) (3) Data frame handling\nI1006 20:13:14.023772 253 log.go:172] (0x40007f8b00) Data frame received for 1\nI1006 20:13:14.023875 253 log.go:172] (0x4000a0e000) (1) Data frame handling\nI1006 20:13:14.024001 253 log.go:172] (0x4000a0e000) (1) Data frame sent\nI1006 20:13:14.025936 253 log.go:172] (0x40007f8b00) (0x4000a0e000) Stream removed, broadcasting: 1\nI1006 20:13:14.029841 253 log.go:172] (0x40007f8b00) Go away received\nI1006 20:13:14.034219 253 log.go:172] (0x40007f8b00) (0x4000a0e000) Stream removed, broadcasting: 1\nI1006 20:13:14.034524 253 log.go:172] (0x40007f8b00) (0x4000a1e000) Stream removed, broadcasting: 3\nI1006 20:13:14.034735 253 log.go:172] (0x40007f8b00) (0x4000a0e0a0) Stream removed, broadcasting: 5\n" Oct 6 20:13:14.043: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 6 20:13:14.044: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 6 20:13:14.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 6 20:13:15.557: INFO: stderr: "I1006 20:13:15.388402 275 log.go:172] (0x40009e6bb0) (0x400077a1e0) Create stream\nI1006 20:13:15.392464 275 log.go:172] (0x40009e6bb0) (0x400077a1e0) Stream added, broadcasting: 1\nI1006 20:13:15.407936 275 log.go:172] (0x40009e6bb0) Reply frame received for 1\nI1006 20:13:15.408707 275 log.go:172] (0x40009e6bb0) (0x4000830000) Create stream\nI1006 20:13:15.408782 275 log.go:172] (0x40009e6bb0) (0x4000830000) Stream added, broadcasting: 3\nI1006 
20:13:15.410335 275 log.go:172] (0x40009e6bb0) Reply frame received for 3\nI1006 20:13:15.412095 275 log.go:172] (0x40009e6bb0) (0x40005095e0) Create stream\nI1006 20:13:15.412383 275 log.go:172] (0x40009e6bb0) (0x40005095e0) Stream added, broadcasting: 5\nI1006 20:13:15.415383 275 log.go:172] (0x40009e6bb0) Reply frame received for 5\nI1006 20:13:15.500189 275 log.go:172] (0x40009e6bb0) Data frame received for 5\nI1006 20:13:15.500434 275 log.go:172] (0x40005095e0) (5) Data frame handling\nI1006 20:13:15.500985 275 log.go:172] (0x40005095e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1006 20:13:15.531645 275 log.go:172] (0x40009e6bb0) Data frame received for 3\nI1006 20:13:15.531927 275 log.go:172] (0x4000830000) (3) Data frame handling\nI1006 20:13:15.532160 275 log.go:172] (0x40009e6bb0) Data frame received for 5\nI1006 20:13:15.532337 275 log.go:172] (0x40005095e0) (5) Data frame handling\nI1006 20:13:15.532473 275 log.go:172] (0x4000830000) (3) Data frame sent\nI1006 20:13:15.532655 275 log.go:172] (0x40009e6bb0) Data frame received for 3\nI1006 20:13:15.532805 275 log.go:172] (0x4000830000) (3) Data frame handling\nI1006 20:13:15.533109 275 log.go:172] (0x40009e6bb0) Data frame received for 1\nI1006 20:13:15.533245 275 log.go:172] (0x400077a1e0) (1) Data frame handling\nI1006 20:13:15.533349 275 log.go:172] (0x400077a1e0) (1) Data frame sent\nI1006 20:13:15.536007 275 log.go:172] (0x40009e6bb0) (0x400077a1e0) Stream removed, broadcasting: 1\nI1006 20:13:15.538504 275 log.go:172] (0x40009e6bb0) Go away received\nI1006 20:13:15.546448 275 log.go:172] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x3:(*spdystream.Stream)(0x4000830000), 0x5:(*spdystream.Stream)(0x40005095e0)}\nI1006 20:13:15.546970 275 log.go:172] (0x40009e6bb0) (0x400077a1e0) Stream removed, broadcasting: 1\nI1006 20:13:15.547549 275 log.go:172] (0x40009e6bb0) (0x4000830000) Stream removed, broadcasting: 3\nI1006 20:13:15.548090 275 log.go:172] 
(0x40009e6bb0) (0x40005095e0) Stream removed, broadcasting: 5\n" Oct 6 20:13:15.558: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 6 20:13:15.558: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 6 20:13:15.558: INFO: Waiting for statefulset status.replicas updated to 0 Oct 6 20:13:15.565: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Oct 6 20:13:25.580: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 6 20:13:25.581: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Oct 6 20:13:25.581: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Oct 6 20:13:25.598: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999986206s Oct 6 20:13:26.608: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991345313s Oct 6 20:13:27.617: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.981373823s Oct 6 20:13:28.628: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.972086862s Oct 6 20:13:29.636: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.960502922s Oct 6 20:13:30.646: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.952787975s Oct 6 20:13:31.658: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.942598459s Oct 6 20:13:32.667: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.931306541s Oct 6 20:13:33.677: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.921695367s Oct 6 20:13:34.688: INFO: Verifying statefulset ss doesn't scale past 3 for another 911.82542ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-7600 Oct 6 20:13:35.699: INFO: Running '/usr/local/bin/kubectl
--kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 6 20:13:37.133: INFO: stderr: "I1006 20:13:37.039970 298 log.go:172] (0x400011a2c0) (0x40006546e0) Create stream\nI1006 20:13:37.042837 298 log.go:172] (0x400011a2c0) (0x40006546e0) Stream added, broadcasting: 1\nI1006 20:13:37.056759 298 log.go:172] (0x400011a2c0) Reply frame received for 1\nI1006 20:13:37.057619 298 log.go:172] (0x400011a2c0) (0x40007b4000) Create stream\nI1006 20:13:37.057694 298 log.go:172] (0x400011a2c0) (0x40007b4000) Stream added, broadcasting: 3\nI1006 20:13:37.059561 298 log.go:172] (0x400011a2c0) Reply frame received for 3\nI1006 20:13:37.060129 298 log.go:172] (0x400011a2c0) (0x40007b8000) Create stream\nI1006 20:13:37.060241 298 log.go:172] (0x400011a2c0) (0x40007b8000) Stream added, broadcasting: 5\nI1006 20:13:37.062318 298 log.go:172] (0x400011a2c0) Reply frame received for 5\nI1006 20:13:37.114960 298 log.go:172] (0x400011a2c0) Data frame received for 5\nI1006 20:13:37.115193 298 log.go:172] (0x400011a2c0) Data frame received for 1\nI1006 20:13:37.115551 298 log.go:172] (0x40007b8000) (5) Data frame handling\nI1006 20:13:37.115916 298 log.go:172] (0x400011a2c0) Data frame received for 3\nI1006 20:13:37.116075 298 log.go:172] (0x40007b4000) (3) Data frame handling\nI1006 20:13:37.116350 298 log.go:172] (0x40006546e0) (1) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1006 20:13:37.117952 298 log.go:172] (0x40007b8000) (5) Data frame sent\nI1006 20:13:37.118141 298 log.go:172] (0x40007b4000) (3) Data frame sent\nI1006 20:13:37.118322 298 log.go:172] (0x400011a2c0) Data frame received for 3\nI1006 20:13:37.118457 298 log.go:172] (0x40007b4000) (3) Data frame handling\nI1006 20:13:37.118738 298 log.go:172] (0x40006546e0) (1) Data frame sent\nI1006 20:13:37.119033 298 log.go:172] (0x400011a2c0) Data frame received for 5\nI1006 20:13:37.119176 298 log.go:172] 
(0x40007b8000) (5) Data frame handling\nI1006 20:13:37.121909 298 log.go:172] (0x400011a2c0) (0x40006546e0) Stream removed, broadcasting: 1\nI1006 20:13:37.123386 298 log.go:172] (0x400011a2c0) Go away received\nI1006 20:13:37.126911 298 log.go:172] (0x400011a2c0) (0x40006546e0) Stream removed, broadcasting: 1\nI1006 20:13:37.127184 298 log.go:172] (0x400011a2c0) (0x40007b4000) Stream removed, broadcasting: 3\nI1006 20:13:37.127374 298 log.go:172] (0x400011a2c0) (0x40007b8000) Stream removed, broadcasting: 5\n" Oct 6 20:13:37.134: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 6 20:13:37.135: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 6 20:13:37.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 6 20:13:38.757: INFO: stderr: "I1006 20:13:38.651996 321 log.go:172] (0x4000932bb0) (0x40006b81e0) Create stream\nI1006 20:13:38.656924 321 log.go:172] (0x4000932bb0) (0x40006b81e0) Stream added, broadcasting: 1\nI1006 20:13:38.671444 321 log.go:172] (0x4000932bb0) Reply frame received for 1\nI1006 20:13:38.672422 321 log.go:172] (0x4000932bb0) (0x40007efb80) Create stream\nI1006 20:13:38.672540 321 log.go:172] (0x4000932bb0) (0x40007efb80) Stream added, broadcasting: 3\nI1006 20:13:38.674320 321 log.go:172] (0x4000932bb0) Reply frame received for 3\nI1006 20:13:38.674642 321 log.go:172] (0x4000932bb0) (0x40007efc20) Create stream\nI1006 20:13:38.674716 321 log.go:172] (0x4000932bb0) (0x40007efc20) Stream added, broadcasting: 5\nI1006 20:13:38.676140 321 log.go:172] (0x4000932bb0) Reply frame received for 5\nI1006 20:13:38.735221 321 log.go:172] (0x4000932bb0) Data frame received for 1\nI1006 20:13:38.735658 321 log.go:172] (0x4000932bb0) Data frame received for 5\nI1006 20:13:38.735870 321 
log.go:172] (0x40007efc20) (5) Data frame handling\nI1006 20:13:38.736208 321 log.go:172] (0x40006b81e0) (1) Data frame handling\nI1006 20:13:38.736488 321 log.go:172] (0x4000932bb0) Data frame received for 3\nI1006 20:13:38.736633 321 log.go:172] (0x40007efb80) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1006 20:13:38.738895 321 log.go:172] (0x40006b81e0) (1) Data frame sent\nI1006 20:13:38.739091 321 log.go:172] (0x40007efc20) (5) Data frame sent\nI1006 20:13:38.739305 321 log.go:172] (0x40007efb80) (3) Data frame sent\nI1006 20:13:38.739503 321 log.go:172] (0x4000932bb0) Data frame received for 5\nI1006 20:13:38.739593 321 log.go:172] (0x4000932bb0) Data frame received for 3\nI1006 20:13:38.739708 321 log.go:172] (0x40007efb80) (3) Data frame handling\nI1006 20:13:38.739815 321 log.go:172] (0x40007efc20) (5) Data frame handling\nI1006 20:13:38.741494 321 log.go:172] (0x4000932bb0) (0x40006b81e0) Stream removed, broadcasting: 1\nI1006 20:13:38.745514 321 log.go:172] (0x4000932bb0) Go away received\nI1006 20:13:38.747791 321 log.go:172] (0x4000932bb0) (0x40006b81e0) Stream removed, broadcasting: 1\nI1006 20:13:38.748458 321 log.go:172] (0x4000932bb0) (0x40007efb80) Stream removed, broadcasting: 3\nI1006 20:13:38.748722 321 log.go:172] (0x4000932bb0) (0x40007efc20) Stream removed, broadcasting: 5\n" Oct 6 20:13:38.758: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 6 20:13:38.758: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 6 20:13:38.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 6 20:13:40.113: INFO: rc: 1 Oct 6 20:13:40.115: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: error: unable to upgrade connection: container not found ("webserver")
error: exit status 1
Oct 6 20:13:50.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:13:51.474: INFO: rc: 1
Oct 6 20:13:51.474: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: error: unable to upgrade connection: container not found ("webserver")
error: exit status 1
Oct 6 20:14:01.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:14:02.745: INFO: rc: 1
Oct 6 20:14:02.745: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:14:12.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:14:13.993: INFO: rc: 1
Oct 6 20:14:13.994: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:14:23.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:14:25.221: INFO: rc: 1
Oct 6 20:14:25.222: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:14:35.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:14:36.453: INFO: rc: 1
Oct 6 20:14:36.454: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:14:46.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:14:47.658: INFO: rc: 1
Oct 6 20:14:47.658: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:14:57.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:14:58.934: INFO: rc: 1
Oct 6 20:14:58.935: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:15:08.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:15:10.159: INFO: rc: 1
Oct 6 20:15:10.160: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:15:20.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:15:21.393: INFO: rc: 1
Oct 6 20:15:21.393: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:15:31.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:15:32.622: INFO: rc: 1
Oct 6 20:15:32.623: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:15:42.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:15:43.872: INFO: rc: 1
Oct 6 20:15:43.872: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:15:53.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:15:55.129: INFO: rc: 1
Oct 6 20:15:55.130: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:16:05.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:16:06.349: INFO: rc: 1
Oct 6 20:16:06.349: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:16:16.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:16:17.595: INFO: rc: 1
Oct 6 20:16:17.595: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:16:27.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:16:28.847: INFO: rc: 1
Oct 6 20:16:28.848: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:16:38.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:16:42.894: INFO: rc: 1
Oct 6 20:16:42.895: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:16:52.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:16:54.119: INFO: rc: 1
Oct 6 20:16:54.119: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:17:04.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:17:05.345: INFO: rc: 1
Oct 6 20:17:05.346: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:17:15.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:17:16.552: INFO: rc: 1
Oct 6 20:17:16.552: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:17:26.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:17:27.795: INFO: rc: 1
Oct 6 20:17:27.796: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:17:37.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:17:38.991: INFO: rc: 1
Oct 6 20:17:38.991: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:17:48.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:17:50.229: INFO: rc: 1
Oct 6 20:17:50.229: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:18:00.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:18:01.494: INFO: rc: 1
Oct 6 20:18:01.494: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:18:11.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:18:12.761: INFO: rc: 1
Oct 6 20:18:12.761: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:18:22.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:18:24.885: INFO: rc: 1
Oct 6 20:18:24.886: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:18:34.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:18:36.120: INFO: rc: 1
Oct 6 20:18:36.121: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Oct 6 20:18:46.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7600 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 6 20:18:47.368: INFO: rc: 1
Oct 6 20:18:47.369: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2:
Oct 6 20:18:47.369: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Oct 6 20:18:47.391: INFO: Deleting all statefulset in ns statefulset-7600
Oct 6 20:18:47.395: INFO: Scaling statefulset ss to 0
Oct 6 20:18:47.410: INFO: Waiting for statefulset status.replicas updated to 0
Oct 6 20:18:47.414: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct 6 20:18:47.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7600" for this suite.
• [SLOW TEST:379.709 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":32,"skipped":393,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct 6 20:18:47.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-607b97a0-2cf2-4ab9-8763-e0e6bb0f0691
STEP: Creating a pod to test consume secrets
Oct 6 20:18:47.593: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3bb95482-2cbf-4ee2-bce2-f212bee83074" in namespace "projected-3587" to be "success or failure"
Oct 6 20:18:47.598: INFO: Pod "pod-projected-secrets-3bb95482-2cbf-4ee2-bce2-f212bee83074": Phase="Pending", Reason="", readiness=false. Elapsed: 4.802038ms
Oct 6 20:18:49.605: INFO: Pod "pod-projected-secrets-3bb95482-2cbf-4ee2-bce2-f212bee83074": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011852722s
Oct 6 20:18:51.612: INFO: Pod "pod-projected-secrets-3bb95482-2cbf-4ee2-bce2-f212bee83074": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018604615s
STEP: Saw pod success
Oct 6 20:18:51.612: INFO: Pod "pod-projected-secrets-3bb95482-2cbf-4ee2-bce2-f212bee83074" satisfied condition "success or failure"
Oct 6 20:18:51.618: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-3bb95482-2cbf-4ee2-bce2-f212bee83074 container projected-secret-volume-test:
STEP: delete the pod
Oct 6 20:18:51.741: INFO: Waiting for pod pod-projected-secrets-3bb95482-2cbf-4ee2-bce2-f212bee83074 to disappear
Oct 6 20:18:51.754: INFO: Pod pod-projected-secrets-3bb95482-2cbf-4ee2-bce2-f212bee83074 no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct 6 20:18:51.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3587" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":409,"failed":0}
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct 6 20:18:51.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-rnd7
STEP: Creating a pod to test atomic-volume-subpath
Oct 6 20:18:51.909: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rnd7" in namespace "subpath-5758" to be "success or failure"
Oct 6 20:18:51.920: INFO: Pod "pod-subpath-test-configmap-rnd7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.260824ms
Oct 6 20:18:53.973: INFO: Pod "pod-subpath-test-configmap-rnd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064210979s
Oct 6 20:18:55.980: INFO: Pod "pod-subpath-test-configmap-rnd7": Phase="Running", Reason="", readiness=true. Elapsed: 4.070786357s
Oct 6 20:18:57.988: INFO: Pod "pod-subpath-test-configmap-rnd7": Phase="Running", Reason="", readiness=true. Elapsed: 6.078307289s
Oct 6 20:18:59.996: INFO: Pod "pod-subpath-test-configmap-rnd7": Phase="Running", Reason="", readiness=true. Elapsed: 8.086617686s
Oct 6 20:19:02.005: INFO: Pod "pod-subpath-test-configmap-rnd7": Phase="Running", Reason="", readiness=true. Elapsed: 10.096129145s
Oct 6 20:19:04.012: INFO: Pod "pod-subpath-test-configmap-rnd7": Phase="Running", Reason="", readiness=true. Elapsed: 12.102862488s
Oct 6 20:19:06.019: INFO: Pod "pod-subpath-test-configmap-rnd7": Phase="Running", Reason="", readiness=true. Elapsed: 14.109399899s
Oct 6 20:19:08.026: INFO: Pod "pod-subpath-test-configmap-rnd7": Phase="Running", Reason="", readiness=true. Elapsed: 16.116394463s
Oct 6 20:19:10.033: INFO: Pod "pod-subpath-test-configmap-rnd7": Phase="Running", Reason="", readiness=true. Elapsed: 18.123668611s
Oct 6 20:19:12.060: INFO: Pod "pod-subpath-test-configmap-rnd7": Phase="Running", Reason="", readiness=true. Elapsed: 20.150276746s
Oct 6 20:19:14.067: INFO: Pod "pod-subpath-test-configmap-rnd7": Phase="Running", Reason="", readiness=true. Elapsed: 22.157431976s
Oct 6 20:19:16.073: INFO: Pod "pod-subpath-test-configmap-rnd7": Phase="Running", Reason="", readiness=true. Elapsed: 24.163734135s
Oct 6 20:19:18.084: INFO: Pod "pod-subpath-test-configmap-rnd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.17467876s
STEP: Saw pod success
Oct 6 20:19:18.084: INFO: Pod "pod-subpath-test-configmap-rnd7" satisfied condition "success or failure"
Oct 6 20:19:18.088: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-rnd7 container test-container-subpath-configmap-rnd7:
STEP: delete the pod
Oct 6 20:19:18.109: INFO: Waiting for pod pod-subpath-test-configmap-rnd7 to disappear
Oct 6 20:19:18.113: INFO: Pod pod-subpath-test-configmap-rnd7 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-rnd7
Oct 6 20:19:18.114: INFO: Deleting pod "pod-subpath-test-configmap-rnd7" in namespace "subpath-5758"
[AfterEach] [sig-storage] Subpath
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct 6 20:19:18.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5758" for this suite.
• [SLOW TEST:26.362 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":34,"skipped":415,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct 6 20:19:18.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct 6 20:19:18.206: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct 6 20:19:22.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6993" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":35,"skipped":422,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct 6 20:19:22.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct 6 20:19:26.581: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct 6 20:19:28.610: INFO: deployment status:
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737612366, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737612366, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737612366, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737612366, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 6 20:19:30.653: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737612366, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737612366, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737612366, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737612366, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 6 20:19:33.646: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct 6 20:19:33.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2475-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct 6 20:19:34.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2112" for this suite.
STEP: Destroying namespace "webhook-2112-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:12.006 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":36,"skipped":426,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct 6 20:19:34.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-d6a302dd-a19c-4ffc-b3b9-ef4ae9ae515b
STEP: Creating configMap with name cm-test-opt-upd-100f049d-3ba9-4c2c-bf50-e17ed1f2e0c7
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-d6a302dd-a19c-4ffc-b3b9-ef4ae9ae515b
STEP: Updating configmap cm-test-opt-upd-100f049d-3ba9-4c2c-bf50-e17ed1f2e0c7
STEP: Creating configMap with name cm-test-opt-create-4d08e620-431b-49d3-aced-904364892aa4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct 6 20:21:15.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4040" for this suite.
• [SLOW TEST:100.817 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":440,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct 6 20:21:15.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-608d5de4-7a9e-474c-a859-b8080c349ccc
STEP: Creating secret with name secret-projected-all-test-volume-2365b4fa-c41b-466a-83b3-562b57b4c342
STEP: Creating a pod to test Check all projections for projected volume plugin
Oct 6 20:21:15.890: INFO: Waiting up to 5m0s for pod "projected-volume-ff1e4965-79e9-450a-818a-7a1853fe9c18" in namespace "projected-1433" to be "success or failure"
Oct 6 20:21:15.924: INFO: Pod "projected-volume-ff1e4965-79e9-450a-818a-7a1853fe9c18": Phase="Pending", Reason="", readiness=false. Elapsed: 33.539011ms
Oct 6 20:21:17.930: INFO: Pod "projected-volume-ff1e4965-79e9-450a-818a-7a1853fe9c18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040088774s
Oct 6 20:21:19.942: INFO: Pod "projected-volume-ff1e4965-79e9-450a-818a-7a1853fe9c18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051857306s
STEP: Saw pod success
Oct 6 20:21:19.942: INFO: Pod "projected-volume-ff1e4965-79e9-450a-818a-7a1853fe9c18" satisfied condition "success or failure"
Oct 6 20:21:19.947: INFO: Trying to get logs from node jerma-worker pod projected-volume-ff1e4965-79e9-450a-818a-7a1853fe9c18 container projected-all-volume-test:
STEP: delete the pod
Oct 6 20:21:20.123: INFO: Waiting for pod projected-volume-ff1e4965-79e9-450a-818a-7a1853fe9c18 to disappear
Oct 6 20:21:20.135: INFO: Pod projected-volume-ff1e4965-79e9-450a-818a-7a1853fe9c18 no longer exists
[AfterEach] [sig-storage] Projected combined
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct 6 20:21:20.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1433" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":38,"skipped":469,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct 6 20:21:20.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-44b9ceca-fcd1-410b-8c92-39c3c23c28c6
STEP: Creating a pod to test consume secrets
Oct 6 20:21:20.275: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2191f6b7-9f20-4670-bc23-35a1cfd1d083" in namespace "projected-2762" to be "success or failure"
Oct 6 20:21:20.303: INFO: Pod "pod-projected-secrets-2191f6b7-9f20-4670-bc23-35a1cfd1d083": Phase="Pending", Reason="", readiness=false. Elapsed: 27.789623ms
Oct 6 20:21:22.326: INFO: Pod "pod-projected-secrets-2191f6b7-9f20-4670-bc23-35a1cfd1d083": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050486086s
Oct 6 20:21:24.332: INFO: Pod "pod-projected-secrets-2191f6b7-9f20-4670-bc23-35a1cfd1d083": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056653961s
STEP: Saw pod success
Oct 6 20:21:24.332: INFO: Pod "pod-projected-secrets-2191f6b7-9f20-4670-bc23-35a1cfd1d083" satisfied condition "success or failure"
Oct 6 20:21:24.337: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-2191f6b7-9f20-4670-bc23-35a1cfd1d083 container projected-secret-volume-test:
STEP: delete the pod
Oct 6 20:21:24.453: INFO: Waiting for pod pod-projected-secrets-2191f6b7-9f20-4670-bc23-35a1cfd1d083 to disappear
Oct 6 20:21:24.468: INFO: Pod pod-projected-secrets-2191f6b7-9f20-4670-bc23-35a1cfd1d083 no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct 6 20:21:24.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2762" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":477,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct 6 20:21:24.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct 6 20:21:24.549: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-b6e196f6-e78a-48da-ad3e-f9219f8b88d6" in namespace "security-context-test-3805" to be "success or failure"
Oct 6 20:21:24.597: INFO: Pod "busybox-privileged-false-b6e196f6-e78a-48da-ad3e-f9219f8b88d6": Phase="Pending", Reason="", readiness=false. Elapsed: 48.190276ms
Oct 6 20:21:26.604: INFO: Pod "busybox-privileged-false-b6e196f6-e78a-48da-ad3e-f9219f8b88d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055338397s
Oct 6 20:21:28.612: INFO: Pod "busybox-privileged-false-b6e196f6-e78a-48da-ad3e-f9219f8b88d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062934693s
Oct 6 20:21:28.612: INFO: Pod "busybox-privileged-false-b6e196f6-e78a-48da-ad3e-f9219f8b88d6" satisfied condition "success or failure"
Oct 6 20:21:28.621: INFO: Got logs for pod "busybox-privileged-false-b6e196f6-e78a-48da-ad3e-f9219f8b88d6": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct 6 20:21:28.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3805" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":484,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct 6 20:21:28.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct 6 20:21:31.306: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct 6 20:21:33.324: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737612491, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737612491, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737612491, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737612491, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 6 20:21:36.364: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct 6 20:21:36.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9796" for this suite.
STEP: Destroying namespace "webhook-9796-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.345 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":41,"skipped":490,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct 6 20:21:36.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct 6 20:21:37.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-7404
I1006 20:21:37.121886 7 runners.go:189]
Created replication controller with name: svc-latency-rc, namespace: svc-latency-7404, replica count: 1 I1006 20:21:38.175423 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1006 20:21:39.177205 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1006 20:21:40.178837 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 6 20:21:40.312: INFO: Created: latency-svc-6jthq Oct 6 20:21:40.336: INFO: Got endpoints: latency-svc-6jthq [51.937125ms] Oct 6 20:21:40.409: INFO: Created: latency-svc-77qww Oct 6 20:21:40.464: INFO: Created: latency-svc-88h5g Oct 6 20:21:40.464: INFO: Got endpoints: latency-svc-77qww [127.649581ms] Oct 6 20:21:40.475: INFO: Got endpoints: latency-svc-88h5g [138.005846ms] Oct 6 20:21:40.493: INFO: Created: latency-svc-2grdn Oct 6 20:21:40.542: INFO: Got endpoints: latency-svc-2grdn [205.364388ms] Oct 6 20:21:40.552: INFO: Created: latency-svc-7wmr9 Oct 6 20:21:40.572: INFO: Got endpoints: latency-svc-7wmr9 [234.363415ms] Oct 6 20:21:40.587: INFO: Created: latency-svc-v5fsr Oct 6 20:21:40.609: INFO: Got endpoints: latency-svc-v5fsr [271.471336ms] Oct 6 20:21:40.633: INFO: Created: latency-svc-r927p Oct 6 20:21:40.654: INFO: Got endpoints: latency-svc-r927p [317.588534ms] Oct 6 20:21:40.673: INFO: Created: latency-svc-lfnmt Oct 6 20:21:40.688: INFO: Got endpoints: latency-svc-lfnmt [350.862434ms] Oct 6 20:21:40.710: INFO: Created: latency-svc-vdgv6 Oct 6 20:21:40.722: INFO: Got endpoints: latency-svc-vdgv6 [384.913913ms] Oct 6 20:21:40.751: INFO: Created: latency-svc-csfn8 Oct 6 20:21:40.810: INFO: Got endpoints: latency-svc-csfn8 [473.184683ms] Oct 6 20:21:40.839: INFO: Created: latency-svc-krvv9 Oct 6 20:21:40.863: INFO: Got endpoints: 
latency-svc-krvv9 [525.205834ms] Oct 6 20:21:40.893: INFO: Created: latency-svc-pplsn Oct 6 20:21:40.905: INFO: Got endpoints: latency-svc-pplsn [567.330486ms] Oct 6 20:21:40.960: INFO: Created: latency-svc-lf5rc Oct 6 20:21:40.963: INFO: Got endpoints: latency-svc-lf5rc [624.76469ms] Oct 6 20:21:40.992: INFO: Created: latency-svc-kv8tq Oct 6 20:21:41.002: INFO: Got endpoints: latency-svc-kv8tq [663.796064ms] Oct 6 20:21:41.019: INFO: Created: latency-svc-s9plq Oct 6 20:21:41.034: INFO: Got endpoints: latency-svc-s9plq [697.045109ms] Oct 6 20:21:41.049: INFO: Created: latency-svc-lnx2l Oct 6 20:21:41.104: INFO: Got endpoints: latency-svc-lnx2l [764.391097ms] Oct 6 20:21:41.106: INFO: Created: latency-svc-qkfkp Oct 6 20:21:41.111: INFO: Got endpoints: latency-svc-qkfkp [646.751873ms] Oct 6 20:21:41.141: INFO: Created: latency-svc-6bzgt Oct 6 20:21:41.154: INFO: Got endpoints: latency-svc-6bzgt [678.039355ms] Oct 6 20:21:41.171: INFO: Created: latency-svc-nf5lj Oct 6 20:21:41.196: INFO: Got endpoints: latency-svc-nf5lj [653.303009ms] Oct 6 20:21:41.260: INFO: Created: latency-svc-hc2p7 Oct 6 20:21:41.269: INFO: Got endpoints: latency-svc-hc2p7 [696.93041ms] Oct 6 20:21:41.297: INFO: Created: latency-svc-s9fqf Oct 6 20:21:41.305: INFO: Got endpoints: latency-svc-s9fqf [695.435912ms] Oct 6 20:21:41.325: INFO: Created: latency-svc-s5qj8 Oct 6 20:21:41.357: INFO: Got endpoints: latency-svc-s5qj8 [702.882552ms] Oct 6 20:21:41.440: INFO: Created: latency-svc-gmbxr Oct 6 20:21:41.444: INFO: Got endpoints: latency-svc-gmbxr [756.175294ms] Oct 6 20:21:41.499: INFO: Created: latency-svc-cbbms Oct 6 20:21:41.513: INFO: Got endpoints: latency-svc-cbbms [789.862295ms] Oct 6 20:21:41.535: INFO: Created: latency-svc-2tsk9 Oct 6 20:21:41.583: INFO: Got endpoints: latency-svc-2tsk9 [772.433525ms] Oct 6 20:21:41.597: INFO: Created: latency-svc-znn2k Oct 6 20:21:41.609: INFO: Got endpoints: latency-svc-znn2k [746.37922ms] Oct 6 20:21:41.628: INFO: Created: latency-svc-nhn5c Oct 6 
20:21:41.640: INFO: Got endpoints: latency-svc-nhn5c [734.658904ms] Oct 6 20:21:41.658: INFO: Created: latency-svc-fgf84 Oct 6 20:21:41.670: INFO: Got endpoints: latency-svc-fgf84 [707.435422ms] Oct 6 20:21:41.715: INFO: Created: latency-svc-b7zp4 Oct 6 20:21:41.724: INFO: Got endpoints: latency-svc-b7zp4 [721.741739ms] Oct 6 20:21:41.746: INFO: Created: latency-svc-5f8j2 Oct 6 20:21:41.761: INFO: Got endpoints: latency-svc-5f8j2 [726.606413ms] Oct 6 20:21:41.781: INFO: Created: latency-svc-4jngf Oct 6 20:21:41.863: INFO: Got endpoints: latency-svc-4jngf [758.633214ms] Oct 6 20:21:41.885: INFO: Created: latency-svc-brtj4 Oct 6 20:21:41.898: INFO: Got endpoints: latency-svc-brtj4 [786.679284ms] Oct 6 20:21:41.948: INFO: Created: latency-svc-5s84j Oct 6 20:21:41.991: INFO: Got endpoints: latency-svc-5s84j [837.418355ms] Oct 6 20:21:42.006: INFO: Created: latency-svc-v7m47 Oct 6 20:21:42.017: INFO: Got endpoints: latency-svc-v7m47 [820.629743ms] Oct 6 20:21:42.036: INFO: Created: latency-svc-nh8vp Oct 6 20:21:42.046: INFO: Got endpoints: latency-svc-nh8vp [776.067607ms] Oct 6 20:21:42.066: INFO: Created: latency-svc-4lp75 Oct 6 20:21:42.077: INFO: Got endpoints: latency-svc-4lp75 [772.445847ms] Oct 6 20:21:42.141: INFO: Created: latency-svc-k9bwc Oct 6 20:21:42.167: INFO: Got endpoints: latency-svc-k9bwc [809.233365ms] Oct 6 20:21:42.207: INFO: Created: latency-svc-tc8nx Oct 6 20:21:42.248: INFO: Got endpoints: latency-svc-tc8nx [803.224433ms] Oct 6 20:21:42.275: INFO: Created: latency-svc-plsrh Oct 6 20:21:42.299: INFO: Got endpoints: latency-svc-plsrh [786.168097ms] Oct 6 20:21:42.453: INFO: Created: latency-svc-9fjjt Oct 6 20:21:42.455: INFO: Got endpoints: latency-svc-9fjjt [871.591047ms] Oct 6 20:21:42.498: INFO: Created: latency-svc-gmhqj Oct 6 20:21:42.511: INFO: Got endpoints: latency-svc-gmhqj [901.937221ms] Oct 6 20:21:42.597: INFO: Created: latency-svc-jmgcn Oct 6 20:21:42.618: INFO: Got endpoints: latency-svc-jmgcn [977.763094ms] Oct 6 20:21:42.639: INFO: 
Created: latency-svc-ngpql Oct 6 20:21:42.656: INFO: Got endpoints: latency-svc-ngpql [985.509596ms] Oct 6 20:21:42.739: INFO: Created: latency-svc-ncpww Oct 6 20:21:42.742: INFO: Got endpoints: latency-svc-ncpww [1.017642905s] Oct 6 20:21:42.783: INFO: Created: latency-svc-txh6c Oct 6 20:21:42.796: INFO: Got endpoints: latency-svc-txh6c [1.034404513s] Oct 6 20:21:42.824: INFO: Created: latency-svc-tqtnz Oct 6 20:21:42.838: INFO: Got endpoints: latency-svc-tqtnz [974.347774ms] Oct 6 20:21:42.902: INFO: Created: latency-svc-nl978 Oct 6 20:21:42.903: INFO: Got endpoints: latency-svc-nl978 [1.004689566s] Oct 6 20:21:42.959: INFO: Created: latency-svc-5bt67 Oct 6 20:21:42.977: INFO: Got endpoints: latency-svc-5bt67 [985.90166ms] Oct 6 20:21:43.039: INFO: Created: latency-svc-fh5l8 Oct 6 20:21:43.042: INFO: Got endpoints: latency-svc-fh5l8 [1.025161899s] Oct 6 20:21:43.083: INFO: Created: latency-svc-tcmt7 Oct 6 20:21:43.097: INFO: Got endpoints: latency-svc-tcmt7 [1.050768168s] Oct 6 20:21:43.115: INFO: Created: latency-svc-kfckr Oct 6 20:21:43.127: INFO: Got endpoints: latency-svc-kfckr [1.049803345s] Oct 6 20:21:43.183: INFO: Created: latency-svc-4hr7j Oct 6 20:21:43.185: INFO: Got endpoints: latency-svc-4hr7j [1.018006058s] Oct 6 20:21:43.221: INFO: Created: latency-svc-scgxv Oct 6 20:21:43.226: INFO: Got endpoints: latency-svc-scgxv [978.646292ms] Oct 6 20:21:43.262: INFO: Created: latency-svc-hmphf Oct 6 20:21:43.275: INFO: Got endpoints: latency-svc-hmphf [975.639734ms] Oct 6 20:21:43.325: INFO: Created: latency-svc-hskzg Oct 6 20:21:43.342: INFO: Got endpoints: latency-svc-hskzg [886.778543ms] Oct 6 20:21:43.380: INFO: Created: latency-svc-6jm45 Oct 6 20:21:43.465: INFO: Got endpoints: latency-svc-6jm45 [952.950997ms] Oct 6 20:21:43.527: INFO: Created: latency-svc-x8cvt Oct 6 20:21:43.545: INFO: Got endpoints: latency-svc-x8cvt [927.109079ms] Oct 6 20:21:43.619: INFO: Created: latency-svc-m89c7 Oct 6 20:21:43.635: INFO: Got endpoints: latency-svc-m89c7 
[979.273738ms] Oct 6 20:21:43.676: INFO: Created: latency-svc-qbccn Oct 6 20:21:43.690: INFO: Got endpoints: latency-svc-qbccn [947.55142ms] Oct 6 20:21:43.757: INFO: Created: latency-svc-rh2cl Oct 6 20:21:43.767: INFO: Got endpoints: latency-svc-rh2cl [971.226127ms] Oct 6 20:21:43.781: INFO: Created: latency-svc-scxln Oct 6 20:21:43.792: INFO: Got endpoints: latency-svc-scxln [953.588075ms] Oct 6 20:21:43.900: INFO: Created: latency-svc-kzshq Oct 6 20:21:43.903: INFO: Got endpoints: latency-svc-kzshq [999.243771ms] Oct 6 20:21:43.971: INFO: Created: latency-svc-89g62 Oct 6 20:21:43.994: INFO: Got endpoints: latency-svc-89g62 [1.016315097s] Oct 6 20:21:44.050: INFO: Created: latency-svc-sndff Oct 6 20:21:44.054: INFO: Got endpoints: latency-svc-sndff [1.011739664s] Oct 6 20:21:44.195: INFO: Created: latency-svc-qtlbx Oct 6 20:21:44.223: INFO: Got endpoints: latency-svc-qtlbx [1.126343248s] Oct 6 20:21:44.223: INFO: Created: latency-svc-62l2p Oct 6 20:21:44.234: INFO: Got endpoints: latency-svc-62l2p [1.106259776s] Oct 6 20:21:44.267: INFO: Created: latency-svc-mcb6x Oct 6 20:21:44.283: INFO: Got endpoints: latency-svc-mcb6x [1.097552485s] Oct 6 20:21:44.338: INFO: Created: latency-svc-5fjc8 Oct 6 20:21:44.342: INFO: Got endpoints: latency-svc-5fjc8 [1.115296436s] Oct 6 20:21:44.385: INFO: Created: latency-svc-qm74x Oct 6 20:21:44.432: INFO: Got endpoints: latency-svc-qm74x [1.157284928s] Oct 6 20:21:44.511: INFO: Created: latency-svc-jcrmc Oct 6 20:21:44.515: INFO: Got endpoints: latency-svc-jcrmc [1.172365588s] Oct 6 20:21:44.555: INFO: Created: latency-svc-6wb6l Oct 6 20:21:44.565: INFO: Got endpoints: latency-svc-6wb6l [1.100099846s] Oct 6 20:21:44.588: INFO: Created: latency-svc-8n5fd Oct 6 20:21:44.599: INFO: Got endpoints: latency-svc-8n5fd [1.053923034s] Oct 6 20:21:44.643: INFO: Created: latency-svc-jbsmq Oct 6 20:21:44.670: INFO: Got endpoints: latency-svc-jbsmq [1.034771637s] Oct 6 20:21:44.697: INFO: Created: latency-svc-ftlcr Oct 6 20:21:44.706: INFO: 
Got endpoints: latency-svc-ftlcr [1.016752923s] Oct 6 20:21:44.723: INFO: Created: latency-svc-ck78m Oct 6 20:21:44.769: INFO: Got endpoints: latency-svc-ck78m [1.001674959s] Oct 6 20:21:44.789: INFO: Created: latency-svc-t2ljn Oct 6 20:21:44.804: INFO: Got endpoints: latency-svc-t2ljn [1.011744159s] Oct 6 20:21:44.828: INFO: Created: latency-svc-q287t Oct 6 20:21:44.853: INFO: Got endpoints: latency-svc-q287t [949.717758ms] Oct 6 20:21:44.900: INFO: Created: latency-svc-5x7xt Oct 6 20:21:44.906: INFO: Got endpoints: latency-svc-5x7xt [912.125378ms] Oct 6 20:21:44.921: INFO: Created: latency-svc-pm4gz Oct 6 20:21:44.936: INFO: Got endpoints: latency-svc-pm4gz [882.029364ms] Oct 6 20:21:44.957: INFO: Created: latency-svc-8tbwx Oct 6 20:21:44.972: INFO: Got endpoints: latency-svc-8tbwx [748.967298ms] Oct 6 20:21:44.999: INFO: Created: latency-svc-hzqwp Oct 6 20:21:45.063: INFO: Got endpoints: latency-svc-hzqwp [828.719042ms] Oct 6 20:21:45.064: INFO: Created: latency-svc-zf497 Oct 6 20:21:45.069: INFO: Got endpoints: latency-svc-zf497 [785.727539ms] Oct 6 20:21:45.092: INFO: Created: latency-svc-h6n96 Oct 6 20:21:45.106: INFO: Got endpoints: latency-svc-h6n96 [763.500558ms] Oct 6 20:21:45.123: INFO: Created: latency-svc-2ndh2 Oct 6 20:21:45.149: INFO: Got endpoints: latency-svc-2ndh2 [717.130336ms] Oct 6 20:21:45.248: INFO: Created: latency-svc-hchck Oct 6 20:21:45.261: INFO: Got endpoints: latency-svc-hchck [745.731853ms] Oct 6 20:21:45.284: INFO: Created: latency-svc-5ww88 Oct 6 20:21:45.298: INFO: Got endpoints: latency-svc-5ww88 [732.791692ms] Oct 6 20:21:45.323: INFO: Created: latency-svc-dtvw6 Oct 6 20:21:45.334: INFO: Got endpoints: latency-svc-dtvw6 [734.776263ms] Oct 6 20:21:45.398: INFO: Created: latency-svc-8tdxn Oct 6 20:21:45.426: INFO: Got endpoints: latency-svc-8tdxn [755.410678ms] Oct 6 20:21:45.461: INFO: Created: latency-svc-kbdmz Oct 6 20:21:45.473: INFO: Got endpoints: latency-svc-kbdmz [766.190813ms] Oct 6 20:21:45.547: INFO: Created: 
latency-svc-tsntj Oct 6 20:21:45.549: INFO: Got endpoints: latency-svc-tsntj [779.925088ms] Oct 6 20:21:45.581: INFO: Created: latency-svc-7g4ck Oct 6 20:21:45.594: INFO: Got endpoints: latency-svc-7g4ck [789.878813ms] Oct 6 20:21:45.617: INFO: Created: latency-svc-94s7s Oct 6 20:21:45.630: INFO: Got endpoints: latency-svc-94s7s [776.794747ms] Oct 6 20:21:45.685: INFO: Created: latency-svc-2t445 Oct 6 20:21:45.710: INFO: Created: latency-svc-n862x Oct 6 20:21:45.710: INFO: Got endpoints: latency-svc-2t445 [804.120709ms] Oct 6 20:21:45.728: INFO: Got endpoints: latency-svc-n862x [791.622137ms] Oct 6 20:21:45.752: INFO: Created: latency-svc-cw4z9 Oct 6 20:21:45.762: INFO: Got endpoints: latency-svc-cw4z9 [789.882973ms] Oct 6 20:21:45.782: INFO: Created: latency-svc-kv275 Oct 6 20:21:45.925: INFO: Got endpoints: latency-svc-kv275 [861.965078ms] Oct 6 20:21:45.926: INFO: Created: latency-svc-kbfb8 Oct 6 20:21:45.932: INFO: Got endpoints: latency-svc-kbfb8 [863.009033ms] Oct 6 20:21:46.175: INFO: Created: latency-svc-vvgjg Oct 6 20:21:46.183: INFO: Got endpoints: latency-svc-vvgjg [1.077721591s] Oct 6 20:21:46.220: INFO: Created: latency-svc-p9x75 Oct 6 20:21:46.234: INFO: Got endpoints: latency-svc-p9x75 [1.084160345s] Oct 6 20:21:46.250: INFO: Created: latency-svc-dxn6p Oct 6 20:21:46.262: INFO: Got endpoints: latency-svc-dxn6p [1.000791541s] Oct 6 20:21:46.313: INFO: Created: latency-svc-5t96t Oct 6 20:21:46.338: INFO: Got endpoints: latency-svc-5t96t [1.04009658s] Oct 6 20:21:46.380: INFO: Created: latency-svc-8dmlz Oct 6 20:21:46.412: INFO: Got endpoints: latency-svc-8dmlz [1.077567496s] Oct 6 20:21:46.459: INFO: Created: latency-svc-xc274 Oct 6 20:21:46.473: INFO: Got endpoints: latency-svc-xc274 [1.046915499s] Oct 6 20:21:46.549: INFO: Created: latency-svc-29kwk Oct 6 20:21:46.594: INFO: Got endpoints: latency-svc-29kwk [1.121276864s] Oct 6 20:21:46.604: INFO: Created: latency-svc-bmjqb Oct 6 20:21:46.617: INFO: Got endpoints: latency-svc-bmjqb [1.067799539s] Oct 
6 20:21:46.634: INFO: Created: latency-svc-p9bg2 Oct 6 20:21:46.648: INFO: Got endpoints: latency-svc-p9bg2 [1.053901101s] Oct 6 20:21:46.677: INFO: Created: latency-svc-p9xtp Oct 6 20:21:46.726: INFO: Got endpoints: latency-svc-p9xtp [1.096486676s] Oct 6 20:21:46.733: INFO: Created: latency-svc-v6tqf Oct 6 20:21:46.750: INFO: Got endpoints: latency-svc-v6tqf [1.039027621s] Oct 6 20:21:46.770: INFO: Created: latency-svc-m9t86 Oct 6 20:21:46.823: INFO: Got endpoints: latency-svc-m9t86 [1.09475833s] Oct 6 20:21:46.893: INFO: Created: latency-svc-4mqct Oct 6 20:21:46.927: INFO: Got endpoints: latency-svc-4mqct [1.164105784s] Oct 6 20:21:46.974: INFO: Created: latency-svc-lkrh8 Oct 6 20:21:47.003: INFO: Got endpoints: latency-svc-lkrh8 [1.077338184s] Oct 6 20:21:47.009: INFO: Created: latency-svc-xw8mj Oct 6 20:21:47.028: INFO: Got endpoints: latency-svc-xw8mj [1.095556764s] Oct 6 20:21:47.054: INFO: Created: latency-svc-k6tjw Oct 6 20:21:47.069: INFO: Got endpoints: latency-svc-k6tjw [885.099665ms] Oct 6 20:21:47.096: INFO: Created: latency-svc-sskwp Oct 6 20:21:47.164: INFO: Got endpoints: latency-svc-sskwp [930.213308ms] Oct 6 20:21:47.165: INFO: Created: latency-svc-4dj9n Oct 6 20:21:47.171: INFO: Got endpoints: latency-svc-4dj9n [908.861295ms] Oct 6 20:21:47.189: INFO: Created: latency-svc-w67k6 Oct 6 20:21:47.203: INFO: Got endpoints: latency-svc-w67k6 [864.338767ms] Oct 6 20:21:47.219: INFO: Created: latency-svc-29kfx Oct 6 20:21:47.238: INFO: Got endpoints: latency-svc-29kfx [826.188387ms] Oct 6 20:21:47.301: INFO: Created: latency-svc-2cm4b Oct 6 20:21:47.306: INFO: Got endpoints: latency-svc-2cm4b [833.190512ms] Oct 6 20:21:47.400: INFO: Created: latency-svc-kqlmd Oct 6 20:21:47.451: INFO: Got endpoints: latency-svc-kqlmd [856.471268ms] Oct 6 20:21:47.453: INFO: Created: latency-svc-mn5h2 Oct 6 20:21:47.475: INFO: Got endpoints: latency-svc-mn5h2 [857.47376ms] Oct 6 20:21:47.506: INFO: Created: latency-svc-28pxc Oct 6 20:21:47.534: INFO: Got endpoints: 
latency-svc-28pxc [885.965205ms] Oct 6 20:21:47.536: INFO: Created: latency-svc-khrt6 Oct 6 20:21:47.545: INFO: Got endpoints: latency-svc-khrt6 [818.835186ms] Oct 6 20:21:47.607: INFO: Created: latency-svc-fstsr Oct 6 20:21:47.611: INFO: Got endpoints: latency-svc-fstsr [861.597337ms] Oct 6 20:21:47.627: INFO: Created: latency-svc-tlchc Oct 6 20:21:47.636: INFO: Got endpoints: latency-svc-tlchc [812.456782ms] Oct 6 20:21:47.648: INFO: Created: latency-svc-vpl7j Oct 6 20:21:47.661: INFO: Got endpoints: latency-svc-vpl7j [733.745226ms] Oct 6 20:21:47.678: INFO: Created: latency-svc-29xm2 Oct 6 20:21:47.690: INFO: Got endpoints: latency-svc-29xm2 [687.6483ms] Oct 6 20:21:47.751: INFO: Created: latency-svc-wgm5k Oct 6 20:21:47.754: INFO: Got endpoints: latency-svc-wgm5k [725.981798ms] Oct 6 20:21:47.796: INFO: Created: latency-svc-lj4xm Oct 6 20:21:47.811: INFO: Got endpoints: latency-svc-lj4xm [742.279773ms] Oct 6 20:21:47.837: INFO: Created: latency-svc-8jr8x Oct 6 20:21:47.926: INFO: Got endpoints: latency-svc-8jr8x [761.243826ms] Oct 6 20:21:47.928: INFO: Created: latency-svc-2hxr8 Oct 6 20:21:47.938: INFO: Got endpoints: latency-svc-2hxr8 [767.434226ms] Oct 6 20:21:47.960: INFO: Created: latency-svc-lr6pg Oct 6 20:21:47.974: INFO: Got endpoints: latency-svc-lr6pg [771.179324ms] Oct 6 20:21:48.006: INFO: Created: latency-svc-fm7xv Oct 6 20:21:48.023: INFO: Got endpoints: latency-svc-fm7xv [783.959671ms] Oct 6 20:21:48.062: INFO: Created: latency-svc-v4mkn Oct 6 20:21:48.071: INFO: Got endpoints: latency-svc-v4mkn [764.117735ms] Oct 6 20:21:48.086: INFO: Created: latency-svc-hp5xg Oct 6 20:21:48.101: INFO: Got endpoints: latency-svc-hp5xg [649.825799ms] Oct 6 20:21:48.116: INFO: Created: latency-svc-jhj86 Oct 6 20:21:48.131: INFO: Got endpoints: latency-svc-jhj86 [656.256326ms] Oct 6 20:21:48.212: INFO: Created: latency-svc-m9f48 Oct 6 20:21:48.222: INFO: Got endpoints: latency-svc-m9f48 [687.887037ms] Oct 6 20:21:48.258: INFO: Created: latency-svc-s2crc Oct 6 
20:21:48.270: INFO: Got endpoints: latency-svc-s2crc [724.389124ms] Oct 6 20:21:48.367: INFO: Created: latency-svc-chp2x Oct 6 20:21:48.370: INFO: Got endpoints: latency-svc-chp2x [758.272777ms] Oct 6 20:21:48.431: INFO: Created: latency-svc-dqf2s Oct 6 20:21:48.459: INFO: Got endpoints: latency-svc-dqf2s [823.247782ms] Oct 6 20:21:48.511: INFO: Created: latency-svc-b98wt Oct 6 20:21:48.541: INFO: Got endpoints: latency-svc-b98wt [880.370126ms] Oct 6 20:21:48.572: INFO: Created: latency-svc-ckldx Oct 6 20:21:48.586: INFO: Got endpoints: latency-svc-ckldx [895.566707ms] Oct 6 20:21:48.602: INFO: Created: latency-svc-f5pkh Oct 6 20:21:48.643: INFO: Got endpoints: latency-svc-f5pkh [888.713478ms] Oct 6 20:21:48.661: INFO: Created: latency-svc-88tp8 Oct 6 20:21:48.667: INFO: Got endpoints: latency-svc-88tp8 [855.909998ms] Oct 6 20:21:48.683: INFO: Created: latency-svc-5jb9f Oct 6 20:21:48.694: INFO: Got endpoints: latency-svc-5jb9f [768.140026ms] Oct 6 20:21:48.713: INFO: Created: latency-svc-9qsj2 Oct 6 20:21:48.725: INFO: Got endpoints: latency-svc-9qsj2 [786.459005ms] Oct 6 20:21:48.788: INFO: Created: latency-svc-rtc4l Oct 6 20:21:48.792: INFO: Got endpoints: latency-svc-rtc4l [817.755372ms] Oct 6 20:21:48.861: INFO: Created: latency-svc-j9btw Oct 6 20:21:48.937: INFO: Got endpoints: latency-svc-j9btw [914.43194ms] Oct 6 20:21:48.938: INFO: Created: latency-svc-kjh4t Oct 6 20:21:48.943: INFO: Got endpoints: latency-svc-kjh4t [872.55183ms] Oct 6 20:21:48.961: INFO: Created: latency-svc-knkxq Oct 6 20:21:48.978: INFO: Got endpoints: latency-svc-knkxq [876.779276ms] Oct 6 20:21:48.998: INFO: Created: latency-svc-hgk7h Oct 6 20:21:49.008: INFO: Got endpoints: latency-svc-hgk7h [876.55791ms] Oct 6 20:21:49.030: INFO: Created: latency-svc-rkcjp Oct 6 20:21:49.086: INFO: Got endpoints: latency-svc-rkcjp [864.099204ms] Oct 6 20:21:49.092: INFO: Created: latency-svc-n42bf Oct 6 20:21:49.116: INFO: Created: latency-svc-7s5c9 Oct 6 20:21:49.116: INFO: Got endpoints: 
latency-svc-n42bf [846.124751ms] Oct 6 20:21:49.129: INFO: Got endpoints: latency-svc-7s5c9 [758.723343ms] Oct 6 20:21:49.145: INFO: Created: latency-svc-259j6 Oct 6 20:21:49.159: INFO: Got endpoints: latency-svc-259j6 [699.682305ms] Oct 6 20:21:49.178: INFO: Created: latency-svc-7gcfb Oct 6 20:21:49.226: INFO: Got endpoints: latency-svc-7gcfb [684.729337ms] Oct 6 20:21:49.244: INFO: Created: latency-svc-2tbrs Oct 6 20:21:49.256: INFO: Got endpoints: latency-svc-2tbrs [669.612474ms] Oct 6 20:21:49.274: INFO: Created: latency-svc-pmw4q Oct 6 20:21:49.287: INFO: Got endpoints: latency-svc-pmw4q [644.160808ms] Oct 6 20:21:49.301: INFO: Created: latency-svc-vbvqv Oct 6 20:21:49.374: INFO: Got endpoints: latency-svc-vbvqv [706.155226ms] Oct 6 20:21:49.415: INFO: Created: latency-svc-4m24c Oct 6 20:21:49.448: INFO: Got endpoints: latency-svc-4m24c [753.135933ms] Oct 6 20:21:49.519: INFO: Created: latency-svc-stpj5 Oct 6 20:21:49.546: INFO: Got endpoints: latency-svc-stpj5 [820.778678ms] Oct 6 20:21:49.549: INFO: Created: latency-svc-vn8xc Oct 6 20:21:49.557: INFO: Got endpoints: latency-svc-vn8xc [764.703302ms] Oct 6 20:21:49.583: INFO: Created: latency-svc-bpjdk Oct 6 20:21:49.606: INFO: Got endpoints: latency-svc-bpjdk [668.281287ms] Oct 6 20:21:49.655: INFO: Created: latency-svc-jnp4p Oct 6 20:21:49.667: INFO: Got endpoints: latency-svc-jnp4p [723.266963ms] Oct 6 20:21:49.700: INFO: Created: latency-svc-4j9lj Oct 6 20:21:49.714: INFO: Got endpoints: latency-svc-4j9lj [736.065339ms] Oct 6 20:21:49.742: INFO: Created: latency-svc-swfqz Oct 6 20:21:49.751: INFO: Got endpoints: latency-svc-swfqz [742.322146ms] Oct 6 20:21:49.811: INFO: Created: latency-svc-sxjkj Oct 6 20:21:49.815: INFO: Got endpoints: latency-svc-sxjkj [728.780981ms] Oct 6 20:21:49.877: INFO: Created: latency-svc-dz7gw Oct 6 20:21:49.889: INFO: Got endpoints: latency-svc-dz7gw [772.674875ms] Oct 6 20:21:49.912: INFO: Created: latency-svc-q4zbd Oct 6 20:21:49.961: INFO: Got endpoints: latency-svc-q4zbd 
[831.628301ms] Oct 6 20:21:49.982: INFO: Created: latency-svc-crlqj Oct 6 20:21:49.992: INFO: Got endpoints: latency-svc-crlqj [832.434774ms] Oct 6 20:21:50.023: INFO: Created: latency-svc-v8ldf Oct 6 20:21:50.034: INFO: Got endpoints: latency-svc-v8ldf [807.313168ms] Oct 6 20:21:50.052: INFO: Created: latency-svc-54kp2 Oct 6 20:21:50.093: INFO: Got endpoints: latency-svc-54kp2 [836.838198ms] Oct 6 20:21:50.117: INFO: Created: latency-svc-fsbhj Oct 6 20:21:50.132: INFO: Got endpoints: latency-svc-fsbhj [844.730254ms] Oct 6 20:21:50.162: INFO: Created: latency-svc-mh72f Oct 6 20:21:50.173: INFO: Got endpoints: latency-svc-mh72f [799.563207ms] Oct 6 20:21:50.229: INFO: Created: latency-svc-g2cbv Oct 6 20:21:50.255: INFO: Created: latency-svc-d6rrr Oct 6 20:21:50.255: INFO: Got endpoints: latency-svc-g2cbv [807.207962ms] Oct 6 20:21:50.291: INFO: Got endpoints: latency-svc-d6rrr [745.534746ms] Oct 6 20:21:50.327: INFO: Created: latency-svc-r7vtv Oct 6 20:21:50.391: INFO: Got endpoints: latency-svc-r7vtv [834.291065ms] Oct 6 20:21:50.393: INFO: Created: latency-svc-mtnxl Oct 6 20:21:50.414: INFO: Got endpoints: latency-svc-mtnxl [807.901717ms] Oct 6 20:21:50.438: INFO: Created: latency-svc-stm97 Oct 6 20:21:50.459: INFO: Got endpoints: latency-svc-stm97 [792.137982ms] Oct 6 20:21:50.483: INFO: Created: latency-svc-5x64f Oct 6 20:21:50.547: INFO: Got endpoints: latency-svc-5x64f [833.181306ms] Oct 6 20:21:50.575: INFO: Created: latency-svc-8qcx5 Oct 6 20:21:50.595: INFO: Got endpoints: latency-svc-8qcx5 [844.314495ms] Oct 6 20:21:50.623: INFO: Created: latency-svc-ngfpv Oct 6 20:21:50.638: INFO: Got endpoints: latency-svc-ngfpv [822.293203ms] Oct 6 20:21:50.672: INFO: Created: latency-svc-m7vgd Oct 6 20:21:50.676: INFO: Got endpoints: latency-svc-m7vgd [786.457296ms] Oct 6 20:21:50.723: INFO: Created: latency-svc-kzfsj Oct 6 20:21:50.735: INFO: Got endpoints: latency-svc-kzfsj [774.248447ms] Oct 6 20:21:50.753: INFO: Created: latency-svc-4kztw Oct 6 20:21:50.764: INFO: 
Got endpoints: latency-svc-4kztw [771.918562ms] Oct 6 20:21:50.799: INFO: Created: latency-svc-djkdt Oct 6 20:21:50.816: INFO: Got endpoints: latency-svc-djkdt [782.18486ms] Oct 6 20:21:50.839: INFO: Created: latency-svc-m4ckc Oct 6 20:21:50.849: INFO: Got endpoints: latency-svc-m4ckc [755.350747ms] Oct 6 20:21:50.873: INFO: Created: latency-svc-lcznz Oct 6 20:21:50.884: INFO: Got endpoints: latency-svc-lcznz [752.214308ms] Oct 6 20:21:50.936: INFO: Created: latency-svc-9855n Oct 6 20:21:50.939: INFO: Got endpoints: latency-svc-9855n [764.971719ms] Oct 6 20:21:50.983: INFO: Created: latency-svc-tlc5k Oct 6 20:21:51.000: INFO: Got endpoints: latency-svc-tlc5k [744.393892ms] Oct 6 20:21:51.020: INFO: Created: latency-svc-9cj2v Oct 6 20:21:51.030: INFO: Got endpoints: latency-svc-9cj2v [738.021219ms] Oct 6 20:21:51.086: INFO: Created: latency-svc-nts7z Oct 6 20:21:51.090: INFO: Got endpoints: latency-svc-nts7z [698.753671ms] Oct 6 20:21:51.136: INFO: Created: latency-svc-vrljl Oct 6 20:21:51.151: INFO: Got endpoints: latency-svc-vrljl [736.752579ms] Oct 6 20:21:51.173: INFO: Created: latency-svc-6kmkw Oct 6 20:21:51.235: INFO: Got endpoints: latency-svc-6kmkw [775.695984ms] Oct 6 20:21:51.239: INFO: Created: latency-svc-gtgft Oct 6 20:21:51.247: INFO: Got endpoints: latency-svc-gtgft [699.698592ms] Oct 6 20:21:51.266: INFO: Created: latency-svc-smwtt Oct 6 20:21:51.288: INFO: Got endpoints: latency-svc-smwtt [692.063794ms] Oct 6 20:21:51.311: INFO: Created: latency-svc-4r4m4 Oct 6 20:21:51.327: INFO: Got endpoints: latency-svc-4r4m4 [688.631967ms] Oct 6 20:21:51.381: INFO: Created: latency-svc-76vft Oct 6 20:21:51.391: INFO: Got endpoints: latency-svc-76vft [715.403641ms] Oct 6 20:21:51.409: INFO: Created: latency-svc-knds7 Oct 6 20:21:51.423: INFO: Got endpoints: latency-svc-knds7 [687.601409ms] Oct 6 20:21:51.451: INFO: Created: latency-svc-n9j56 Oct 6 20:21:51.465: INFO: Got endpoints: latency-svc-n9j56 [700.872128ms] Oct 6 20:21:51.523: INFO: Created: 
latency-svc-nmx6l Oct 6 20:21:51.526: INFO: Got endpoints: latency-svc-nmx6l [709.988311ms] Oct 6 20:21:51.551: INFO: Created: latency-svc-bws6k Oct 6 20:21:51.587: INFO: Got endpoints: latency-svc-bws6k [737.673317ms] Oct 6 20:21:51.588: INFO: Latencies: [127.649581ms 138.005846ms 205.364388ms 234.363415ms 271.471336ms 317.588534ms 350.862434ms 384.913913ms 473.184683ms 525.205834ms 567.330486ms 624.76469ms 644.160808ms 646.751873ms 649.825799ms 653.303009ms 656.256326ms 663.796064ms 668.281287ms 669.612474ms 678.039355ms 684.729337ms 687.601409ms 687.6483ms 687.887037ms 688.631967ms 692.063794ms 695.435912ms 696.93041ms 697.045109ms 698.753671ms 699.682305ms 699.698592ms 700.872128ms 702.882552ms 706.155226ms 707.435422ms 709.988311ms 715.403641ms 717.130336ms 721.741739ms 723.266963ms 724.389124ms 725.981798ms 726.606413ms 728.780981ms 732.791692ms 733.745226ms 734.658904ms 734.776263ms 736.065339ms 736.752579ms 737.673317ms 738.021219ms 742.279773ms 742.322146ms 744.393892ms 745.534746ms 745.731853ms 746.37922ms 748.967298ms 752.214308ms 753.135933ms 755.350747ms 755.410678ms 756.175294ms 758.272777ms 758.633214ms 758.723343ms 761.243826ms 763.500558ms 764.117735ms 764.391097ms 764.703302ms 764.971719ms 766.190813ms 767.434226ms 768.140026ms 771.179324ms 771.918562ms 772.433525ms 772.445847ms 772.674875ms 774.248447ms 775.695984ms 776.067607ms 776.794747ms 779.925088ms 782.18486ms 783.959671ms 785.727539ms 786.168097ms 786.457296ms 786.459005ms 786.679284ms 789.862295ms 789.878813ms 789.882973ms 791.622137ms 792.137982ms 799.563207ms 803.224433ms 804.120709ms 807.207962ms 807.313168ms 807.901717ms 809.233365ms 812.456782ms 817.755372ms 818.835186ms 820.629743ms 820.778678ms 822.293203ms 823.247782ms 826.188387ms 828.719042ms 831.628301ms 832.434774ms 833.181306ms 833.190512ms 834.291065ms 836.838198ms 837.418355ms 844.314495ms 844.730254ms 846.124751ms 855.909998ms 856.471268ms 857.47376ms 861.597337ms 861.965078ms 863.009033ms 864.099204ms 864.338767ms 
871.591047ms 872.55183ms 876.55791ms 876.779276ms 880.370126ms 882.029364ms 885.099665ms 885.965205ms 886.778543ms 888.713478ms 895.566707ms 901.937221ms 908.861295ms 912.125378ms 914.43194ms 927.109079ms 930.213308ms 947.55142ms 949.717758ms 952.950997ms 953.588075ms 971.226127ms 974.347774ms 975.639734ms 977.763094ms 978.646292ms 979.273738ms 985.509596ms 985.90166ms 999.243771ms 1.000791541s 1.001674959s 1.004689566s 1.011739664s 1.011744159s 1.016315097s 1.016752923s 1.017642905s 1.018006058s 1.025161899s 1.034404513s 1.034771637s 1.039027621s 1.04009658s 1.046915499s 1.049803345s 1.050768168s 1.053901101s 1.053923034s 1.067799539s 1.077338184s 1.077567496s 1.077721591s 1.084160345s 1.09475833s 1.095556764s 1.096486676s 1.097552485s 1.100099846s 1.106259776s 1.115296436s 1.121276864s 1.126343248s 1.157284928s 1.164105784s 1.172365588s]
Oct 6 20:21:51.589: INFO: 50 %ile: 799.563207ms
Oct 6 20:21:51.589: INFO: 90 %ile: 1.050768168s
Oct 6 20:21:51.589: INFO: 99 %ile: 1.164105784s
Oct 6 20:21:51.589: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct 6 20:21:51.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-7404" for this suite.
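The 50/90/99 %ile lines above are order statistics taken over the 200 recorded endpoint latencies. A minimal nearest-rank sketch of that kind of computation (an illustrative helper, not the e2e framework's actual implementation; the sample values below are hypothetical):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile over latency samples (seconds).
    A sketch of the idea only; not the e2e framework's exact code."""
    ordered = sorted(samples)
    # nearest-rank: smallest value such that at least pct% of samples are <= it
    rank = max(1, math.ceil(pct / 100.0 * len(ordered)))
    return ordered[rank - 1]

# hypothetical samples; the real run collected 200 of them
latencies = [0.127, 0.644, 0.738, 0.799, 0.856, 1.050, 1.164, 1.172]
p50 = percentile(latencies, 50)  # 4th smallest of the 8 samples under nearest-rank
```

Under a nearest-rank method, the 50 %ile of 200 samples is simply the 100th smallest value, which is consistent with figures like 799.563207ms appearing verbatim in the sample list.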
• [SLOW TEST:14.637 seconds]
[sig-network] Service endpoints latency
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":42,"skipped":523,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct 6 20:21:51.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-089295e9-721b-4c8d-847f-4492d4ec6df6
STEP: Creating a pod to test consume configMaps
Oct 6 20:21:51.718: INFO: Waiting up to 5m0s for pod "pod-configmaps-1b692d46-18cc-422f-80a7-999b4674e1c7" in namespace "configmap-2241" to be "success or failure"
Oct 6 20:21:51.721: INFO: Pod "pod-configmaps-1b692d46-18cc-422f-80a7-999b4674e1c7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.33677ms
Oct 6 20:21:53.729: INFO: Pod "pod-configmaps-1b692d46-18cc-422f-80a7-999b4674e1c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010725029s
Oct 6 20:21:55.735: INFO: Pod "pod-configmaps-1b692d46-18cc-422f-80a7-999b4674e1c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017355832s
STEP: Saw pod success
Oct 6 20:21:55.736: INFO: Pod "pod-configmaps-1b692d46-18cc-422f-80a7-999b4674e1c7" satisfied condition "success or failure"
Oct 6 20:21:55.740: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-1b692d46-18cc-422f-80a7-999b4674e1c7 container configmap-volume-test:
STEP: delete the pod
Oct 6 20:21:55.841: INFO: Waiting for pod pod-configmaps-1b692d46-18cc-422f-80a7-999b4674e1c7 to disappear
Oct 6 20:21:55.863: INFO: Pod pod-configmaps-1b692d46-18cc-422f-80a7-999b4674e1c7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct 6 20:21:55.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2241" for this suite.
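The repeated `Phase="Pending" ... Elapsed:` lines above show the framework polling the pod until it reaches a terminal phase within the 5m0s budget. A hedged sketch of that wait-loop pattern (`get_phase` is a hypothetical stand-in for a real pod-status API lookup, not the framework's actual signature):

```python
import time

def wait_for_success_or_failure(get_phase, timeout=300.0, interval=2.0):
    """Poll a pod-phase callable until it reports Succeeded or Failed,
    mirroring the log's "Waiting up to 5m0s ..." loop. get_phase is a
    hypothetical stand-in for a real API call that returns the phase."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)  # the e2e log prints an Elapsed line on each poll
    raise TimeoutError("pod never reached a terminal phase")
```

The ~2s spacing between the Elapsed lines in the log matches a fixed poll interval of this kind.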
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":531,"failed":0}
SS
------------------------------
[k8s.io] KubeletManagedEtcHosts
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct 6 20:21:55.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Oct 6 20:22:08.109: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-787 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 6 20:22:08.109: INFO: >>> kubeConfig: /root/.kube/config
I1006 20:22:08.195692 7 log.go:172] (0x40017582c0) (0x4001bb5540) Create stream
I1006 20:22:08.196006 7 log.go:172] (0x40017582c0) (0x4001bb5540) Stream added, broadcasting: 1
I1006 20:22:08.200151 7 log.go:172] (0x40017582c0) Reply frame received for 1
I1006 20:22:08.200287 7 log.go:172] (0x40017582c0) (0x400120e140) Create stream
I1006
20:22:08.200361 7 log.go:172] (0x40017582c0) (0x400120e140) Stream added, broadcasting: 3 I1006 20:22:08.201862 7 log.go:172] (0x40017582c0) Reply frame received for 3 I1006 20:22:08.202075 7 log.go:172] (0x40017582c0) (0x4001bb55e0) Create stream I1006 20:22:08.202200 7 log.go:172] (0x40017582c0) (0x4001bb55e0) Stream added, broadcasting: 5 I1006 20:22:08.203932 7 log.go:172] (0x40017582c0) Reply frame received for 5 I1006 20:22:08.277223 7 log.go:172] (0x40017582c0) Data frame received for 5 I1006 20:22:08.277351 7 log.go:172] (0x4001bb55e0) (5) Data frame handling I1006 20:22:08.277489 7 log.go:172] (0x40017582c0) Data frame received for 3 I1006 20:22:08.277612 7 log.go:172] (0x400120e140) (3) Data frame handling I1006 20:22:08.277730 7 log.go:172] (0x400120e140) (3) Data frame sent I1006 20:22:08.277823 7 log.go:172] (0x40017582c0) Data frame received for 3 I1006 20:22:08.277932 7 log.go:172] (0x400120e140) (3) Data frame handling I1006 20:22:08.278231 7 log.go:172] (0x40017582c0) Data frame received for 1 I1006 20:22:08.278304 7 log.go:172] (0x4001bb5540) (1) Data frame handling I1006 20:22:08.278380 7 log.go:172] (0x4001bb5540) (1) Data frame sent I1006 20:22:08.279135 7 log.go:172] (0x40017582c0) (0x4001bb5540) Stream removed, broadcasting: 1 I1006 20:22:08.279227 7 log.go:172] (0x40017582c0) Go away received I1006 20:22:08.279478 7 log.go:172] (0x40017582c0) (0x4001bb5540) Stream removed, broadcasting: 1 I1006 20:22:08.279595 7 log.go:172] (0x40017582c0) (0x400120e140) Stream removed, broadcasting: 3 I1006 20:22:08.279688 7 log.go:172] (0x40017582c0) (0x4001bb55e0) Stream removed, broadcasting: 5 Oct 6 20:22:08.279: INFO: Exec stderr: "" Oct 6 20:22:08.280: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-787 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 6 20:22:08.280: INFO: >>> kubeConfig: /root/.kube/config I1006 20:22:08.348436 7 log.go:172] 
(0x4005a16370) (0x400120e960) Create stream I1006 20:22:08.348710 7 log.go:172] (0x4005a16370) (0x400120e960) Stream added, broadcasting: 1 I1006 20:22:08.353781 7 log.go:172] (0x4005a16370) Reply frame received for 1 I1006 20:22:08.353950 7 log.go:172] (0x4005a16370) (0x400120ea00) Create stream I1006 20:22:08.354016 7 log.go:172] (0x4005a16370) (0x400120ea00) Stream added, broadcasting: 3 I1006 20:22:08.355674 7 log.go:172] (0x4005a16370) Reply frame received for 3 I1006 20:22:08.355786 7 log.go:172] (0x4005a16370) (0x4001bb5680) Create stream I1006 20:22:08.355847 7 log.go:172] (0x4005a16370) (0x4001bb5680) Stream added, broadcasting: 5 I1006 20:22:08.357593 7 log.go:172] (0x4005a16370) Reply frame received for 5 I1006 20:22:08.400124 7 log.go:172] (0x4005a16370) Data frame received for 3 I1006 20:22:08.400288 7 log.go:172] (0x400120ea00) (3) Data frame handling I1006 20:22:08.400399 7 log.go:172] (0x4005a16370) Data frame received for 5 I1006 20:22:08.400540 7 log.go:172] (0x4001bb5680) (5) Data frame handling I1006 20:22:08.400671 7 log.go:172] (0x400120ea00) (3) Data frame sent I1006 20:22:08.400824 7 log.go:172] (0x4005a16370) Data frame received for 3 I1006 20:22:08.401135 7 log.go:172] (0x400120ea00) (3) Data frame handling I1006 20:22:08.401276 7 log.go:172] (0x4005a16370) Data frame received for 1 I1006 20:22:08.401368 7 log.go:172] (0x400120e960) (1) Data frame handling I1006 20:22:08.401473 7 log.go:172] (0x400120e960) (1) Data frame sent I1006 20:22:08.401591 7 log.go:172] (0x4005a16370) (0x400120e960) Stream removed, broadcasting: 1 I1006 20:22:08.401725 7 log.go:172] (0x4005a16370) Go away received I1006 20:22:08.401911 7 log.go:172] (0x4005a16370) (0x400120e960) Stream removed, broadcasting: 1 I1006 20:22:08.402037 7 log.go:172] (0x4005a16370) (0x400120ea00) Stream removed, broadcasting: 3 I1006 20:22:08.402138 7 log.go:172] (0x4005a16370) (0x4001bb5680) Stream removed, broadcasting: 5 Oct 6 20:22:08.402: INFO: Exec stderr: "" Oct 6 20:22:08.402: 
INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-787 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 6 20:22:08.402: INFO: >>> kubeConfig: /root/.kube/config I1006 20:22:08.521604 7 log.go:172] (0x400291b970) (0x40027fcaa0) Create stream I1006 20:22:08.521872 7 log.go:172] (0x400291b970) (0x40027fcaa0) Stream added, broadcasting: 1 I1006 20:22:08.526573 7 log.go:172] (0x400291b970) Reply frame received for 1 I1006 20:22:08.526715 7 log.go:172] (0x400291b970) (0x40012d61e0) Create stream I1006 20:22:08.526783 7 log.go:172] (0x400291b970) (0x40012d61e0) Stream added, broadcasting: 3 I1006 20:22:08.528479 7 log.go:172] (0x400291b970) Reply frame received for 3 I1006 20:22:08.528602 7 log.go:172] (0x400291b970) (0x40012d6280) Create stream I1006 20:22:08.528687 7 log.go:172] (0x400291b970) (0x40012d6280) Stream added, broadcasting: 5 I1006 20:22:08.530350 7 log.go:172] (0x400291b970) Reply frame received for 5 I1006 20:22:08.594053 7 log.go:172] (0x400291b970) Data frame received for 5 I1006 20:22:08.594261 7 log.go:172] (0x40012d6280) (5) Data frame handling I1006 20:22:08.594371 7 log.go:172] (0x400291b970) Data frame received for 3 I1006 20:22:08.594495 7 log.go:172] (0x40012d61e0) (3) Data frame handling I1006 20:22:08.594629 7 log.go:172] (0x40012d61e0) (3) Data frame sent I1006 20:22:08.594735 7 log.go:172] (0x400291b970) Data frame received for 3 I1006 20:22:08.594830 7 log.go:172] (0x40012d61e0) (3) Data frame handling I1006 20:22:08.594999 7 log.go:172] (0x400291b970) Data frame received for 1 I1006 20:22:08.595110 7 log.go:172] (0x40027fcaa0) (1) Data frame handling I1006 20:22:08.595218 7 log.go:172] (0x40027fcaa0) (1) Data frame sent I1006 20:22:08.595359 7 log.go:172] (0x400291b970) (0x40027fcaa0) Stream removed, broadcasting: 1 I1006 20:22:08.595504 7 log.go:172] (0x400291b970) Go away received I1006 20:22:08.595893 7 log.go:172] (0x400291b970) 
(0x40027fcaa0) Stream removed, broadcasting: 1 I1006 20:22:08.596025 7 log.go:172] (0x400291b970) (0x40012d61e0) Stream removed, broadcasting: 3 I1006 20:22:08.596128 7 log.go:172] (0x400291b970) (0x40012d6280) Stream removed, broadcasting: 5 Oct 6 20:22:08.596: INFO: Exec stderr: "" Oct 6 20:22:08.596: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-787 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 6 20:22:08.596: INFO: >>> kubeConfig: /root/.kube/config I1006 20:22:08.664143 7 log.go:172] (0x4002ff84d0) (0x4001efd400) Create stream I1006 20:22:08.664317 7 log.go:172] (0x4002ff84d0) (0x4001efd400) Stream added, broadcasting: 1 I1006 20:22:08.667381 7 log.go:172] (0x4002ff84d0) Reply frame received for 1 I1006 20:22:08.667510 7 log.go:172] (0x4002ff84d0) (0x400144e0a0) Create stream I1006 20:22:08.667574 7 log.go:172] (0x4002ff84d0) (0x400144e0a0) Stream added, broadcasting: 3 I1006 20:22:08.668766 7 log.go:172] (0x4002ff84d0) Reply frame received for 3 I1006 20:22:08.669052 7 log.go:172] (0x4002ff84d0) (0x400120eaa0) Create stream I1006 20:22:08.669119 7 log.go:172] (0x4002ff84d0) (0x400120eaa0) Stream added, broadcasting: 5 I1006 20:22:08.670389 7 log.go:172] (0x4002ff84d0) Reply frame received for 5 I1006 20:22:08.731305 7 log.go:172] (0x4002ff84d0) Data frame received for 5 I1006 20:22:08.731449 7 log.go:172] (0x400120eaa0) (5) Data frame handling I1006 20:22:08.731595 7 log.go:172] (0x4002ff84d0) Data frame received for 3 I1006 20:22:08.731730 7 log.go:172] (0x400144e0a0) (3) Data frame handling I1006 20:22:08.731863 7 log.go:172] (0x400144e0a0) (3) Data frame sent I1006 20:22:08.731961 7 log.go:172] (0x4002ff84d0) Data frame received for 3 I1006 20:22:08.732023 7 log.go:172] (0x400144e0a0) (3) Data frame handling I1006 20:22:08.732314 7 log.go:172] (0x4002ff84d0) Data frame received for 1 I1006 20:22:08.732416 7 log.go:172] (0x4001efd400) (1) Data 
frame handling
I1006 20:22:08.732516 7 log.go:172] (0x4001efd400) (1) Data frame sent
I1006 20:22:08.732605 7 log.go:172] (0x4002ff84d0) (0x4001efd400) Stream removed, broadcasting: 1
I1006 20:22:08.732702 7 log.go:172] (0x4002ff84d0) Go away received
I1006 20:22:08.732962 7 log.go:172] (0x4002ff84d0) (0x4001efd400) Stream removed, broadcasting: 1
I1006 20:22:08.733073 7 log.go:172] (0x4002ff84d0) (0x400144e0a0) Stream removed, broadcasting: 3
I1006 20:22:08.733199 7 log.go:172] (0x4002ff84d0) (0x400120eaa0) Stream removed, broadcasting: 5
Oct  6 20:22:08.733: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Oct  6 20:22:08.733: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-787 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct  6 20:22:08.733: INFO: >>> kubeConfig: /root/.kube/config
I1006 20:22:08.787769 7 log.go:172] (0x4001a10420) (0x400144e820) Create stream
I1006 20:22:08.788023 7 log.go:172] (0x4001a10420) (0x400144e820) Stream added, broadcasting: 1
I1006 20:22:08.791291 7 log.go:172] (0x4001a10420) Reply frame received for 1
I1006 20:22:08.791484 7 log.go:172] (0x4001a10420) (0x400120eb40) Create stream
I1006 20:22:08.791595 7 log.go:172] (0x4001a10420) (0x400120eb40) Stream added, broadcasting: 3
I1006 20:22:08.792956 7 log.go:172] (0x4001a10420) Reply frame received for 3
I1006 20:22:08.793105 7 log.go:172] (0x4001a10420) (0x400144e8c0) Create stream
I1006 20:22:08.793174 7 log.go:172] (0x4001a10420) (0x400144e8c0) Stream added, broadcasting: 5
I1006 20:22:08.794354 7 log.go:172] (0x4001a10420) Reply frame received for 5
I1006 20:22:08.848514 7 log.go:172] (0x4001a10420) Data frame received for 3
I1006 20:22:08.848640 7 log.go:172] (0x400120eb40) (3) Data frame handling
I1006 20:22:08.848722 7 log.go:172] (0x4001a10420) Data frame received for 5
I1006 20:22:08.848808 7 log.go:172] (0x400144e8c0) (5) Data frame handling
I1006 20:22:08.848937 7 log.go:172] (0x400120eb40) (3) Data frame sent
I1006 20:22:08.849010 7 log.go:172] (0x4001a10420) Data frame received for 3
I1006 20:22:08.849087 7 log.go:172] (0x400120eb40) (3) Data frame handling
I1006 20:22:08.849713 7 log.go:172] (0x4001a10420) Data frame received for 1
I1006 20:22:08.849835 7 log.go:172] (0x400144e820) (1) Data frame handling
I1006 20:22:08.849934 7 log.go:172] (0x400144e820) (1) Data frame sent
I1006 20:22:08.850022 7 log.go:172] (0x4001a10420) (0x400144e820) Stream removed, broadcasting: 1
I1006 20:22:08.850126 7 log.go:172] (0x4001a10420) Go away received
I1006 20:22:08.850440 7 log.go:172] (0x4001a10420) (0x400144e820) Stream removed, broadcasting: 1
I1006 20:22:08.850557 7 log.go:172] (0x4001a10420) (0x400120eb40) Stream removed, broadcasting: 3
I1006 20:22:08.850686 7 log.go:172] (0x4001a10420) (0x400144e8c0) Stream removed, broadcasting: 5
Oct  6 20:22:08.850: INFO: Exec stderr: ""
Oct  6 20:22:08.850: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-787 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct  6 20:22:08.851: INFO: >>> kubeConfig: /root/.kube/config
I1006 20:22:08.915955 7 log.go:172] (0x40017589a0) (0x4001bb5900) Create stream
I1006 20:22:08.916138 7 log.go:172] (0x40017589a0) (0x4001bb5900) Stream added, broadcasting: 1
I1006 20:22:08.920262 7 log.go:172] (0x40017589a0) Reply frame received for 1
I1006 20:22:08.920425 7 log.go:172] (0x40017589a0) (0x400120ebe0) Create stream
I1006 20:22:08.920505 7 log.go:172] (0x40017589a0) (0x400120ebe0) Stream added, broadcasting: 3
I1006 20:22:08.922042 7 log.go:172] (0x40017589a0) Reply frame received for 3
I1006 20:22:08.922161 7 log.go:172] (0x40017589a0) (0x400120ec80) Create stream
I1006 20:22:08.922231 7 log.go:172] (0x40017589a0) (0x400120ec80) Stream added, broadcasting: 5
I1006 20:22:08.923279 7 log.go:172] (0x40017589a0) Reply frame received for 5
I1006 20:22:08.983267 7 log.go:172] (0x40017589a0) Data frame received for 3
I1006 20:22:08.983457 7 log.go:172] (0x400120ebe0) (3) Data frame handling
I1006 20:22:08.983555 7 log.go:172] (0x400120ebe0) (3) Data frame sent
I1006 20:22:08.983640 7 log.go:172] (0x40017589a0) Data frame received for 3
I1006 20:22:08.983717 7 log.go:172] (0x400120ebe0) (3) Data frame handling
I1006 20:22:08.983875 7 log.go:172] (0x40017589a0) Data frame received for 5
I1006 20:22:08.983966 7 log.go:172] (0x400120ec80) (5) Data frame handling
I1006 20:22:08.984681 7 log.go:172] (0x40017589a0) Data frame received for 1
I1006 20:22:08.984784 7 log.go:172] (0x4001bb5900) (1) Data frame handling
I1006 20:22:08.984987 7 log.go:172] (0x4001bb5900) (1) Data frame sent
I1006 20:22:08.985090 7 log.go:172] (0x40017589a0) (0x4001bb5900) Stream removed, broadcasting: 1
I1006 20:22:08.985234 7 log.go:172] (0x40017589a0) Go away received
I1006 20:22:08.985912 7 log.go:172] (0x40017589a0) (0x4001bb5900) Stream removed, broadcasting: 1
I1006 20:22:08.986012 7 log.go:172] (0x40017589a0) (0x400120ebe0) Stream removed, broadcasting: 3
I1006 20:22:08.986121 7 log.go:172] (0x40017589a0) (0x400120ec80) Stream removed, broadcasting: 5
Oct  6 20:22:08.986: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Oct  6 20:22:08.986: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-787 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct  6 20:22:08.986: INFO: >>> kubeConfig: /root/.kube/config
I1006 20:22:09.059863 7 log.go:172] (0x4001a10b00) (0x400144ef00) Create stream
I1006 20:22:09.060080 7 log.go:172] (0x4001a10b00) (0x400144ef00) Stream added, broadcasting: 1
I1006 20:22:09.063789 7 log.go:172] (0x4001a10b00) Reply frame received for 1
I1006 20:22:09.063942 7 log.go:172] (0x4001a10b00) (0x40027fcb40) Create stream
I1006 20:22:09.064023 7 log.go:172] (0x4001a10b00) (0x40027fcb40) Stream added, broadcasting: 3
I1006 20:22:09.065495 7 log.go:172] (0x4001a10b00) Reply frame received for 3
I1006 20:22:09.065656 7 log.go:172] (0x4001a10b00) (0x4001bb59a0) Create stream
I1006 20:22:09.065743 7 log.go:172] (0x4001a10b00) (0x4001bb59a0) Stream added, broadcasting: 5
I1006 20:22:09.067130 7 log.go:172] (0x4001a10b00) Reply frame received for 5
I1006 20:22:09.131043 7 log.go:172] (0x4001a10b00) Data frame received for 5
I1006 20:22:09.131229 7 log.go:172] (0x4001bb59a0) (5) Data frame handling
I1006 20:22:09.131351 7 log.go:172] (0x4001a10b00) Data frame received for 3
I1006 20:22:09.131492 7 log.go:172] (0x40027fcb40) (3) Data frame handling
I1006 20:22:09.131636 7 log.go:172] (0x40027fcb40) (3) Data frame sent
I1006 20:22:09.131758 7 log.go:172] (0x4001a10b00) Data frame received for 3
I1006 20:22:09.131864 7 log.go:172] (0x40027fcb40) (3) Data frame handling
I1006 20:22:09.133053 7 log.go:172] (0x4001a10b00) Data frame received for 1
I1006 20:22:09.133205 7 log.go:172] (0x400144ef00) (1) Data frame handling
I1006 20:22:09.133357 7 log.go:172] (0x400144ef00) (1) Data frame sent
I1006 20:22:09.133486 7 log.go:172] (0x4001a10b00) (0x400144ef00) Stream removed, broadcasting: 1
I1006 20:22:09.133646 7 log.go:172] (0x4001a10b00) Go away received
I1006 20:22:09.133967 7 log.go:172] (0x4001a10b00) (0x400144ef00) Stream removed, broadcasting: 1
I1006 20:22:09.134072 7 log.go:172] (0x4001a10b00) (0x40027fcb40) Stream removed, broadcasting: 3
I1006 20:22:09.134160 7 log.go:172] (0x4001a10b00) (0x4001bb59a0) Stream removed, broadcasting: 5
Oct  6 20:22:09.134: INFO: Exec stderr: ""
Oct  6 20:22:09.134: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-787 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct  6 20:22:09.134: INFO: >>> kubeConfig: /root/.kube/config
I1006 20:22:09.188111 7 log.go:172] (0x4001758fd0) (0x4001622000) Create stream
I1006 20:22:09.188245 7 log.go:172] (0x4001758fd0) (0x4001622000) Stream added, broadcasting: 1
I1006 20:22:09.191762 7 log.go:172] (0x4001758fd0) Reply frame received for 1
I1006 20:22:09.191900 7 log.go:172] (0x4001758fd0) (0x40027fcbe0) Create stream
I1006 20:22:09.191971 7 log.go:172] (0x4001758fd0) (0x40027fcbe0) Stream added, broadcasting: 3
I1006 20:22:09.193319 7 log.go:172] (0x4001758fd0) Reply frame received for 3
I1006 20:22:09.193468 7 log.go:172] (0x4001758fd0) (0x40016220a0) Create stream
I1006 20:22:09.193547 7 log.go:172] (0x4001758fd0) (0x40016220a0) Stream added, broadcasting: 5
I1006 20:22:09.194993 7 log.go:172] (0x4001758fd0) Reply frame received for 5
I1006 20:22:09.252703 7 log.go:172] (0x4001758fd0) Data frame received for 5
I1006 20:22:09.252929 7 log.go:172] (0x40016220a0) (5) Data frame handling
I1006 20:22:09.253033 7 log.go:172] (0x4001758fd0) Data frame received for 3
I1006 20:22:09.253123 7 log.go:172] (0x40027fcbe0) (3) Data frame handling
I1006 20:22:09.253212 7 log.go:172] (0x40027fcbe0) (3) Data frame sent
I1006 20:22:09.253277 7 log.go:172] (0x4001758fd0) Data frame received for 3
I1006 20:22:09.253332 7 log.go:172] (0x40027fcbe0) (3) Data frame handling
I1006 20:22:09.253977 7 log.go:172] (0x4001758fd0) Data frame received for 1
I1006 20:22:09.254059 7 log.go:172] (0x4001622000) (1) Data frame handling
I1006 20:22:09.254150 7 log.go:172] (0x4001622000) (1) Data frame sent
I1006 20:22:09.254231 7 log.go:172] (0x4001758fd0) (0x4001622000) Stream removed, broadcasting: 1
I1006 20:22:09.254343 7 log.go:172] (0x4001758fd0) Go away received
I1006 20:22:09.254667 7 log.go:172] (0x4001758fd0) (0x4001622000) Stream removed, broadcasting: 1
I1006 20:22:09.254752 7 log.go:172] (0x4001758fd0) (0x40027fcbe0) Stream removed, broadcasting: 3
I1006 20:22:09.254823 7 log.go:172] (0x4001758fd0) (0x40016220a0) Stream removed, broadcasting: 5
Oct  6 20:22:09.254: INFO: Exec stderr: ""
Oct  6 20:22:09.255: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-787 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct  6 20:22:09.255: INFO: >>> kubeConfig: /root/.kube/config
I1006 20:22:09.310630 7 log.go:172] (0x4001759600) (0x40016223c0) Create stream
I1006 20:22:09.310794 7 log.go:172] (0x4001759600) (0x40016223c0) Stream added, broadcasting: 1
I1006 20:22:09.315075 7 log.go:172] (0x4001759600) Reply frame received for 1
I1006 20:22:09.315390 7 log.go:172] (0x4001759600) (0x40027fcd20) Create stream
I1006 20:22:09.315508 7 log.go:172] (0x4001759600) (0x40027fcd20) Stream added, broadcasting: 3
I1006 20:22:09.317413 7 log.go:172] (0x4001759600) Reply frame received for 3
I1006 20:22:09.317557 7 log.go:172] (0x4001759600) (0x4001622460) Create stream
I1006 20:22:09.317632 7 log.go:172] (0x4001759600) (0x4001622460) Stream added, broadcasting: 5
I1006 20:22:09.320249 7 log.go:172] (0x4001759600) Reply frame received for 5
I1006 20:22:09.382625 7 log.go:172] (0x4001759600) Data frame received for 5
I1006 20:22:09.382789 7 log.go:172] (0x4001622460) (5) Data frame handling
I1006 20:22:09.382922 7 log.go:172] (0x4001759600) Data frame received for 3
I1006 20:22:09.383032 7 log.go:172] (0x40027fcd20) (3) Data frame handling
I1006 20:22:09.383147 7 log.go:172] (0x40027fcd20) (3) Data frame sent
I1006 20:22:09.383223 7 log.go:172] (0x4001759600) Data frame received for 3
I1006 20:22:09.383288 7 log.go:172] (0x40027fcd20) (3) Data frame handling
I1006 20:22:09.383842 7 log.go:172] (0x4001759600) Data frame received for 1
I1006 20:22:09.383927 7 log.go:172] (0x40016223c0) (1) Data frame handling
I1006 20:22:09.384009 7 log.go:172] (0x40016223c0) (1) Data frame sent
I1006 20:22:09.384085 7 log.go:172] (0x4001759600) (0x40016223c0) Stream removed, broadcasting: 1
I1006 20:22:09.384167 7 log.go:172] (0x4001759600) Go away received
I1006 20:22:09.384471 7 log.go:172] (0x4001759600) (0x40016223c0) Stream removed, broadcasting: 1
I1006 20:22:09.384569 7 log.go:172] (0x4001759600) (0x40027fcd20) Stream removed, broadcasting: 3
I1006 20:22:09.384655 7 log.go:172] (0x4001759600) (0x4001622460) Stream removed, broadcasting: 5
Oct  6 20:22:09.384: INFO: Exec stderr: ""
Oct  6 20:22:09.384: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-787 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct  6 20:22:09.385: INFO: >>> kubeConfig: /root/.kube/config
I1006 20:22:09.452356 7 log.go:172] (0x4001a11130) (0x400144f0e0) Create stream
I1006 20:22:09.452534 7 log.go:172] (0x4001a11130) (0x400144f0e0) Stream added, broadcasting: 1
I1006 20:22:09.455978 7 log.go:172] (0x4001a11130) Reply frame received for 1
I1006 20:22:09.456105 7 log.go:172] (0x4001a11130) (0x400144f220) Create stream
I1006 20:22:09.456167 7 log.go:172] (0x4001a11130) (0x400144f220) Stream added, broadcasting: 3
I1006 20:22:09.457358 7 log.go:172] (0x4001a11130) Reply frame received for 3
I1006 20:22:09.457503 7 log.go:172] (0x4001a11130) (0x4001efd4a0) Create stream
I1006 20:22:09.457599 7 log.go:172] (0x4001a11130) (0x4001efd4a0) Stream added, broadcasting: 5
I1006 20:22:09.458981 7 log.go:172] (0x4001a11130) Reply frame received for 5
I1006 20:22:09.510152 7 log.go:172] (0x4001a11130) Data frame received for 3
I1006 20:22:09.510301 7 log.go:172] (0x400144f220) (3) Data frame handling
I1006 20:22:09.510389 7 log.go:172] (0x400144f220) (3) Data frame sent
I1006 20:22:09.510467 7 log.go:172] (0x4001a11130) Data frame received for 3
I1006 20:22:09.510544 7 log.go:172] (0x400144f220) (3) Data frame handling
I1006 20:22:09.510692 7 log.go:172] (0x4001a11130) Data frame received for 5
I1006 20:22:09.510825 7 log.go:172] (0x4001efd4a0) (5) Data frame handling
I1006 20:22:09.511368 7 log.go:172] (0x4001a11130) Data frame received for 1
I1006 20:22:09.511445 7 log.go:172] (0x400144f0e0) (1) Data frame handling
I1006 20:22:09.511521 7 log.go:172] (0x400144f0e0) (1) Data frame sent
I1006 20:22:09.511604 7 log.go:172] (0x4001a11130) (0x400144f0e0) Stream removed, broadcasting: 1
I1006 20:22:09.511702 7 log.go:172] (0x4001a11130) Go away received
I1006 20:22:09.511966 7 log.go:172] (0x4001a11130) (0x400144f0e0) Stream removed, broadcasting: 1
I1006 20:22:09.512061 7 log.go:172] (0x4001a11130) (0x400144f220) Stream removed, broadcasting: 3
I1006 20:22:09.512134 7 log.go:172] (0x4001a11130) (0x4001efd4a0) Stream removed, broadcasting: 5
Oct  6 20:22:09.512: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:22:09.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-787" for this suite.

• [SLOW TEST:13.725 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":533,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:22:09.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 20:22:09.817: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:22:16.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8523" for this suite.

• [SLOW TEST:6.307 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":608,"failed":0}
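The test above passes because a container spec that leaves command and args blank falls through to the image's own ENTRYPOINT and CMD. A minimal sketch of the kind of pod such a test creates (the pod name and image here are illustrative, not taken from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29            # hypothetical image
    # command and args deliberately omitted: the kubelet runs the
    # image's built-in ENTRYPOINT/CMD unchanged
```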
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:22:16.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-9315ca3e-4409-4251-a159-b85a0439da9f
STEP: Creating a pod to test consume configMaps
Oct  6 20:22:17.063: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fbe8e7f4-68f7-4e10-bf7a-54d962b9d5ab" in namespace "projected-7133" to be "success or failure"
Oct  6 20:22:17.092: INFO: Pod "pod-projected-configmaps-fbe8e7f4-68f7-4e10-bf7a-54d962b9d5ab": Phase="Pending", Reason="", readiness=false. Elapsed: 29.167665ms
Oct  6 20:22:19.099: INFO: Pod "pod-projected-configmaps-fbe8e7f4-68f7-4e10-bf7a-54d962b9d5ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036113939s
Oct  6 20:22:21.129: INFO: Pod "pod-projected-configmaps-fbe8e7f4-68f7-4e10-bf7a-54d962b9d5ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06634928s
STEP: Saw pod success
Oct  6 20:22:21.130: INFO: Pod "pod-projected-configmaps-fbe8e7f4-68f7-4e10-bf7a-54d962b9d5ab" satisfied condition "success or failure"
Oct  6 20:22:21.164: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-fbe8e7f4-68f7-4e10-bf7a-54d962b9d5ab container projected-configmap-volume-test: 
STEP: delete the pod
Oct  6 20:22:21.235: INFO: Waiting for pod pod-projected-configmaps-fbe8e7f4-68f7-4e10-bf7a-54d962b9d5ab to disappear
Oct  6 20:22:21.249: INFO: Pod pod-projected-configmaps-fbe8e7f4-68f7-4e10-bf7a-54d962b9d5ab no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:22:21.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7133" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":631,"failed":0}
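"With mappings" here means individual ConfigMap keys are remapped to custom relative file paths via `items` in a projected volume source. A hedged sketch of such a pod spec (ConfigMap name, key, and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo             # hypothetical name
spec:
  restartPolicy: Never
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # hypothetical ConfigMap
          items:
          - key: data-1                      # hypothetical key
            path: path/to/data-1             # key is remapped to this relative path
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29                      # hypothetical image
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
```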
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:22:21.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Oct  6 20:22:21.475: INFO: Waiting up to 5m0s for pod "downward-api-b01b10b0-0d64-453d-8495-710540daceb4" in namespace "downward-api-5542" to be "success or failure"
Oct  6 20:22:21.526: INFO: Pod "downward-api-b01b10b0-0d64-453d-8495-710540daceb4": Phase="Pending", Reason="", readiness=false. Elapsed: 50.417778ms
Oct  6 20:22:23.860: INFO: Pod "downward-api-b01b10b0-0d64-453d-8495-710540daceb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.38442368s
Oct  6 20:22:25.883: INFO: Pod "downward-api-b01b10b0-0d64-453d-8495-710540daceb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.407104056s
STEP: Saw pod success
Oct  6 20:22:25.883: INFO: Pod "downward-api-b01b10b0-0d64-453d-8495-710540daceb4" satisfied condition "success or failure"
Oct  6 20:22:25.896: INFO: Trying to get logs from node jerma-worker2 pod downward-api-b01b10b0-0d64-453d-8495-710540daceb4 container dapi-container: 
STEP: delete the pod
Oct  6 20:22:25.933: INFO: Waiting for pod downward-api-b01b10b0-0d64-453d-8495-710540daceb4 to disappear
Oct  6 20:22:25.948: INFO: Pod downward-api-b01b10b0-0d64-453d-8495-710540daceb4 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:22:25.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5542" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":831,"failed":0}
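The downward API exposes pod metadata to the container as environment variables via `fieldRef`. A sketch of the env section such a test pod uses (variable names and image are illustrative; the `fieldPath` values are the standard ones for name, namespace, and IP):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29            # hypothetical image
    command: ["sh", "-c", "env"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```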
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:22:25.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-c40b32d4-255c-42d4-a836-bbbc462fe56e in namespace container-probe-5901
Oct  6 20:22:30.382: INFO: Started pod liveness-c40b32d4-255c-42d4-a836-bbbc462fe56e in namespace container-probe-5901
STEP: checking the pod's current state and verifying that restartCount is present
Oct  6 20:22:30.388: INFO: Initial restart count of pod liveness-c40b32d4-255c-42d4-a836-bbbc462fe56e is 0
Oct  6 20:22:42.461: INFO: Restart count of pod container-probe-5901/liveness-c40b32d4-255c-42d4-a836-bbbc462fe56e is now 1 (12.072622807s elapsed)
Oct  6 20:23:02.528: INFO: Restart count of pod container-probe-5901/liveness-c40b32d4-255c-42d4-a836-bbbc462fe56e is now 2 (32.140435182s elapsed)
Oct  6 20:23:22.600: INFO: Restart count of pod container-probe-5901/liveness-c40b32d4-255c-42d4-a836-bbbc462fe56e is now 3 (52.211798968s elapsed)
Oct  6 20:23:42.669: INFO: Restart count of pod container-probe-5901/liveness-c40b32d4-255c-42d4-a836-bbbc462fe56e is now 4 (1m12.281409432s elapsed)
Oct  6 20:24:55.195: INFO: Restart count of pod container-probe-5901/liveness-c40b32d4-255c-42d4-a836-bbbc462fe56e is now 5 (2m24.806746882s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:24:55.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5901" for this suite.

• [SLOW TEST:149.280 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":860,"failed":0}
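A restart count that climbs 0 through 5 in steady ~20 s steps, as logged above, is what a liveness probe that keeps failing produces: each failure kills the container, the default `restartPolicy: Always` restarts it, and `restartCount` only ever increases. A sketch of such a pod (probe command and timings are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo              # hypothetical name
spec:
  containers:
  - name: liveness
    image: busybox:1.29            # hypothetical image
    args: ["/bin/sh", "-c", "sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # file never created, so the probe always fails
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
```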
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:24:55.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Oct  6 20:25:03.427: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Oct  6 20:25:03.440: INFO: Pod pod-with-prestop-http-hook still exists
Oct  6 20:25:05.441: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Oct  6 20:25:05.448: INFO: Pod pod-with-prestop-http-hook still exists
Oct  6 20:25:07.441: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Oct  6 20:25:07.446: INFO: Pod pod-with-prestop-http-hook still exists
Oct  6 20:25:09.441: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Oct  6 20:25:09.448: INFO: Pod pod-with-prestop-http-hook still exists
Oct  6 20:25:11.441: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Oct  6 20:25:11.446: INFO: Pod pod-with-prestop-http-hook still exists
Oct  6 20:25:13.441: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Oct  6 20:25:13.447: INFO: Pod pod-with-prestop-http-hook still exists
Oct  6 20:25:15.441: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Oct  6 20:25:15.447: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:25:15.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1778" for this suite.

• [SLOW TEST:20.463 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":874,"failed":0}
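The repeated "still exists" polls above are the deleted pod lingering in Terminating while its preStop hook runs: the kubelet fires the HTTP GET before sending SIGTERM to the container. A hedged sketch of a pod with such a hook (image and handler path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: main
    image: busybox:1.29                # hypothetical image
    lifecycle:
      preStop:
        httpGet:                       # issued by the kubelet before termination
          path: /echo?msg=prestop      # hypothetical handler path
          port: 8080
```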
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:25:15.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct  6 20:25:17.645: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct  6 20:25:19.665: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737612717, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737612717, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737612717, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737612717, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 20:25:21.673: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737612717, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737612717, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737612717, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737612717, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct  6 20:25:24.704: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:25:24.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4566" for this suite.
STEP: Destroying namespace "webhook-4566-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.279 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":51,"skipped":904,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:25:24.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run rc
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1526
[It] should create an rc from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Oct  6 20:25:25.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5528'
Oct  6 20:25:26.448: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Oct  6 20:25:26.448: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Oct  6 20:25:26.463: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-z9v22]
Oct  6 20:25:26.463: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-z9v22" in namespace "kubectl-5528" to be "running and ready"
Oct  6 20:25:26.481: INFO: Pod "e2e-test-httpd-rc-z9v22": Phase="Pending", Reason="", readiness=false. Elapsed: 17.365683ms
Oct  6 20:25:28.618: INFO: Pod "e2e-test-httpd-rc-z9v22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154503148s
Oct  6 20:25:30.623: INFO: Pod "e2e-test-httpd-rc-z9v22": Phase="Running", Reason="", readiness=true. Elapsed: 4.159501251s
Oct  6 20:25:30.623: INFO: Pod "e2e-test-httpd-rc-z9v22" satisfied condition "running and ready"
Oct  6 20:25:30.624: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-z9v22]
Oct  6 20:25:30.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-5528'
Oct  6 20:25:31.966: INFO: stderr: ""
Oct  6 20:25:31.966: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.6. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.6. Set the 'ServerName' directive globally to suppress this message\n[Tue Oct 06 20:25:29.040005 2020] [mpm_event:notice] [pid 1:tid 140282658638696] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Tue Oct 06 20:25:29.040052 2020] [core:notice] [pid 1:tid 140282658638696] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1531
Oct  6 20:25:31.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5528'
Oct  6 20:25:33.234: INFO: stderr: ""
Oct  6 20:25:33.234: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:25:33.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5528" for this suite.

• [SLOW TEST:8.258 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
    should create an rc from an image [Deprecated] [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Deprecated] [Conformance]","total":278,"completed":52,"skipped":911,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:25:33.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Oct  6 20:25:33.551: INFO: Waiting up to 5m0s for pod "downwardapi-volume-740ec6b3-f07f-4d9e-9d8d-67b2eee67f5b" in namespace "projected-2391" to be "success or failure"
Oct  6 20:25:33.596: INFO: Pod "downwardapi-volume-740ec6b3-f07f-4d9e-9d8d-67b2eee67f5b": Phase="Pending", Reason="", readiness=false. Elapsed: 44.242128ms
Oct  6 20:25:35.603: INFO: Pod "downwardapi-volume-740ec6b3-f07f-4d9e-9d8d-67b2eee67f5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051415812s
Oct  6 20:25:37.611: INFO: Pod "downwardapi-volume-740ec6b3-f07f-4d9e-9d8d-67b2eee67f5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059145508s
STEP: Saw pod success
Oct  6 20:25:37.611: INFO: Pod "downwardapi-volume-740ec6b3-f07f-4d9e-9d8d-67b2eee67f5b" satisfied condition "success or failure"
Oct  6 20:25:37.617: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-740ec6b3-f07f-4d9e-9d8d-67b2eee67f5b container client-container: 
STEP: delete the pod
Oct  6 20:25:37.673: INFO: Waiting for pod downwardapi-volume-740ec6b3-f07f-4d9e-9d8d-67b2eee67f5b to disappear
Oct  6 20:25:37.685: INFO: Pod downwardapi-volume-740ec6b3-f07f-4d9e-9d8d-67b2eee67f5b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:25:37.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2391" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":923,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:25:37.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:50
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Oct  6 20:25:41.896: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
Oct  6 20:25:41.896: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Oct  6 20:25:58.101: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:25:58.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5472" for this suite.

• [SLOW TEST:20.420 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  [k8s.io] Delete Grace Period
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should be submitted and removed [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":54,"skipped":957,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:25:58.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-3b99f11a-5cca-4e14-a97b-facc85f7dbf5
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-3b99f11a-5cca-4e14-a97b-facc85f7dbf5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:26:04.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8026" for this suite.

• [SLOW TEST:6.220 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":1027,"failed":0}
S
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:26:04.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:26:19.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2021" for this suite.
STEP: Destroying namespace "nsdeletetest-257" for this suite.
Oct  6 20:26:19.710: INFO: Namespace nsdeletetest-257 was already deleted
STEP: Destroying namespace "nsdeletetest-4211" for this suite.

• [SLOW TEST:15.366 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":56,"skipped":1028,"failed":0}
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:26:19.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-b7f107ae-4f36-4fdb-b17e-378c72937b4e
STEP: Creating a pod to test consume secrets
Oct  6 20:26:19.817: INFO: Waiting up to 5m0s for pod "pod-secrets-bcdd048d-4250-4e13-9610-4286cb54b588" in namespace "secrets-5990" to be "success or failure"
Oct  6 20:26:19.831: INFO: Pod "pod-secrets-bcdd048d-4250-4e13-9610-4286cb54b588": Phase="Pending", Reason="", readiness=false. Elapsed: 14.230087ms
Oct  6 20:26:21.838: INFO: Pod "pod-secrets-bcdd048d-4250-4e13-9610-4286cb54b588": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021323105s
Oct  6 20:26:23.847: INFO: Pod "pod-secrets-bcdd048d-4250-4e13-9610-4286cb54b588": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029586724s
STEP: Saw pod success
Oct  6 20:26:23.847: INFO: Pod "pod-secrets-bcdd048d-4250-4e13-9610-4286cb54b588" satisfied condition "success or failure"
Oct  6 20:26:23.852: INFO: Trying to get logs from node jerma-worker pod pod-secrets-bcdd048d-4250-4e13-9610-4286cb54b588 container secret-volume-test: 
STEP: delete the pod
Oct  6 20:26:23.872: INFO: Waiting for pod pod-secrets-bcdd048d-4250-4e13-9610-4286cb54b588 to disappear
Oct  6 20:26:23.876: INFO: Pod pod-secrets-bcdd048d-4250-4e13-9610-4286cb54b588 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:26:23.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5990" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":1032,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:26:23.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-df593fc3-2da7-4fd9-9aeb-3cad86a6e357
STEP: Creating a pod to test consume configMaps
Oct  6 20:26:23.982: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5836777c-603f-4ca8-845b-76d4b64d8f7c" in namespace "projected-9113" to be "success or failure"
Oct  6 20:26:24.002: INFO: Pod "pod-projected-configmaps-5836777c-603f-4ca8-845b-76d4b64d8f7c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.101871ms
Oct  6 20:26:26.025: INFO: Pod "pod-projected-configmaps-5836777c-603f-4ca8-845b-76d4b64d8f7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043129789s
Oct  6 20:26:28.032: INFO: Pod "pod-projected-configmaps-5836777c-603f-4ca8-845b-76d4b64d8f7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050014112s
STEP: Saw pod success
Oct  6 20:26:28.032: INFO: Pod "pod-projected-configmaps-5836777c-603f-4ca8-845b-76d4b64d8f7c" satisfied condition "success or failure"
Oct  6 20:26:28.038: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-5836777c-603f-4ca8-845b-76d4b64d8f7c container projected-configmap-volume-test: 
STEP: delete the pod
Oct  6 20:26:28.071: INFO: Waiting for pod pod-projected-configmaps-5836777c-603f-4ca8-845b-76d4b64d8f7c to disappear
Oct  6 20:26:28.075: INFO: Pod pod-projected-configmaps-5836777c-603f-4ca8-845b-76d4b64d8f7c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:26:28.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9113" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":1038,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:26:28.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-9c5c50a8-66c0-45fa-bdef-56018f1228c0
STEP: Creating a pod to test consume secrets
Oct  6 20:26:28.198: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8ea3111f-ff1d-401e-b3c8-599a5f9bfe86" in namespace "projected-4385" to be "success or failure"
Oct  6 20:26:28.230: INFO: Pod "pod-projected-secrets-8ea3111f-ff1d-401e-b3c8-599a5f9bfe86": Phase="Pending", Reason="", readiness=false. Elapsed: 32.439785ms
Oct  6 20:26:30.252: INFO: Pod "pod-projected-secrets-8ea3111f-ff1d-401e-b3c8-599a5f9bfe86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053802964s
Oct  6 20:26:32.259: INFO: Pod "pod-projected-secrets-8ea3111f-ff1d-401e-b3c8-599a5f9bfe86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06090148s
STEP: Saw pod success
Oct  6 20:26:32.259: INFO: Pod "pod-projected-secrets-8ea3111f-ff1d-401e-b3c8-599a5f9bfe86" satisfied condition "success or failure"
Oct  6 20:26:32.264: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-8ea3111f-ff1d-401e-b3c8-599a5f9bfe86 container projected-secret-volume-test: 
STEP: delete the pod
Oct  6 20:26:32.307: INFO: Waiting for pod pod-projected-secrets-8ea3111f-ff1d-401e-b3c8-599a5f9bfe86 to disappear
Oct  6 20:26:32.311: INFO: Pod pod-projected-secrets-8ea3111f-ff1d-401e-b3c8-599a5f9bfe86 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:26:32.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4385" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":1060,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:26:32.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Oct  6 20:26:32.395: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bfd3cd49-2ab3-4c34-b8d3-604332a4f4be" in namespace "projected-5339" to be "success or failure"
Oct  6 20:26:32.446: INFO: Pod "downwardapi-volume-bfd3cd49-2ab3-4c34-b8d3-604332a4f4be": Phase="Pending", Reason="", readiness=false. Elapsed: 51.26513ms
Oct  6 20:26:34.454: INFO: Pod "downwardapi-volume-bfd3cd49-2ab3-4c34-b8d3-604332a4f4be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058677058s
Oct  6 20:26:36.468: INFO: Pod "downwardapi-volume-bfd3cd49-2ab3-4c34-b8d3-604332a4f4be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073033173s
STEP: Saw pod success
Oct  6 20:26:36.468: INFO: Pod "downwardapi-volume-bfd3cd49-2ab3-4c34-b8d3-604332a4f4be" satisfied condition "success or failure"
Oct  6 20:26:36.473: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-bfd3cd49-2ab3-4c34-b8d3-604332a4f4be container client-container: 
STEP: delete the pod
Oct  6 20:26:36.491: INFO: Waiting for pod downwardapi-volume-bfd3cd49-2ab3-4c34-b8d3-604332a4f4be to disappear
Oct  6 20:26:36.494: INFO: Pod downwardapi-volume-bfd3cd49-2ab3-4c34-b8d3-604332a4f4be no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:26:36.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5339" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":1081,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:26:36.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct  6 20:26:39.566: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct  6 20:26:41.621: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737612799, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737612799, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737612799, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737612799, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct  6 20:26:44.660: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:26:44.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8963" for this suite.
STEP: Destroying namespace "webhook-8963-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.375 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":61,"skipped":1094,"failed":0}
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:26:44.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Oct  6 20:26:53.041: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct  6 20:26:53.066: INFO: Pod pod-with-poststart-exec-hook still exists
Oct  6 20:26:55.067: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct  6 20:26:55.074: INFO: Pod pod-with-poststart-exec-hook still exists
Oct  6 20:26:57.067: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct  6 20:26:57.073: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:26:57.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6592" for this suite.

• [SLOW TEST:12.207 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1099,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:26:57.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Oct  6 20:27:01.769: INFO: Successfully updated pod "labelsupdatee894a287-aaef-4d2f-8bf9-1714f0bdd5b5"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:27:03.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2070" for this suite.

• [SLOW TEST:6.720 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1101,"failed":0}
S
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:27:03.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Oct  6 20:27:11.450: INFO: 10 pods remaining
Oct  6 20:27:11.451: INFO: 1 pods has nil DeletionTimestamp
Oct  6 20:27:11.451: INFO: 
Oct  6 20:27:12.193: INFO: 0 pods remaining
Oct  6 20:27:12.193: INFO: 0 pods has nil DeletionTimestamp
Oct  6 20:27:12.193: INFO: 
Oct  6 20:27:13.673: INFO: 0 pods remaining
Oct  6 20:27:13.673: INFO: 0 pods has nil DeletionTimestamp
Oct  6 20:27:13.673: INFO: 
STEP: Gathering metrics
W1006 20:27:15.265408       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Oct  6 20:27:15.265: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:27:15.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7480" for this suite.

• [SLOW TEST:11.473 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":64,"skipped":1102,"failed":0}
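The countdown of "pods remaining" above reflects foreground cascading deletion: the RC is marked with a deletionTimestamp and a `foregroundDeletion` finalizer, and only disappears once its dependent pods are gone. A toy stdlib simulation of that ordering (the `owner` type and both helpers are hypothetical simplifications of the garbage collector's behavior):

```go
package main

import "fmt"

// owner is a toy model of the RC under foreground deletion.
type owner struct {
	deleting   bool
	finalizers []string
	dependents int
}

// requestForegroundDelete marks the owner but does not remove it.
func requestForegroundDelete(o *owner) {
	o.deleting = true
	o.finalizers = append(o.finalizers, "foregroundDeletion")
}

// sync mimics one GC pass: remove one dependent, then clear the
// finalizer (letting the owner go away) only when none remain.
func sync(o *owner) (removed bool) {
	if o.dependents > 0 {
		o.dependents--
		return false
	}
	o.finalizers = nil
	return true
}

func main() {
	rc := &owner{dependents: 2}
	requestForegroundDelete(rc)
	for !sync(rc) {
		fmt.Println("rc still present,", rc.dependents, "pods remaining")
	}
	fmt.Println("rc removed")
}
```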
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:27:15.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Oct  6 20:27:15.803: INFO: Waiting up to 5m0s for pod "pod-58d5cddd-a7a5-4cb0-a379-00cda116ec50" in namespace "emptydir-8180" to be "success or failure"
Oct  6 20:27:16.062: INFO: Pod "pod-58d5cddd-a7a5-4cb0-a379-00cda116ec50": Phase="Pending", Reason="", readiness=false. Elapsed: 258.9759ms
Oct  6 20:27:18.079: INFO: Pod "pod-58d5cddd-a7a5-4cb0-a379-00cda116ec50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2761338s
Oct  6 20:27:20.086: INFO: Pod "pod-58d5cddd-a7a5-4cb0-a379-00cda116ec50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.282764626s
STEP: Saw pod success
Oct  6 20:27:20.086: INFO: Pod "pod-58d5cddd-a7a5-4cb0-a379-00cda116ec50" satisfied condition "success or failure"
Oct  6 20:27:20.090: INFO: Trying to get logs from node jerma-worker pod pod-58d5cddd-a7a5-4cb0-a379-00cda116ec50 container test-container: 
STEP: delete the pod
Oct  6 20:27:20.163: INFO: Waiting for pod pod-58d5cddd-a7a5-4cb0-a379-00cda116ec50 to disappear
Oct  6 20:27:20.167: INFO: Pod pod-58d5cddd-a7a5-4cb0-a379-00cda116ec50 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:27:20.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8180" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1102,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:27:20.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Oct  6 20:27:20.265: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a5ca46a6-4390-4147-a6a8-0092839a179c" in namespace "projected-5526" to be "success or failure"
Oct  6 20:27:20.274: INFO: Pod "downwardapi-volume-a5ca46a6-4390-4147-a6a8-0092839a179c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.702045ms
Oct  6 20:27:22.282: INFO: Pod "downwardapi-volume-a5ca46a6-4390-4147-a6a8-0092839a179c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01625865s
Oct  6 20:27:24.290: INFO: Pod "downwardapi-volume-a5ca46a6-4390-4147-a6a8-0092839a179c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024394081s
STEP: Saw pod success
Oct  6 20:27:24.290: INFO: Pod "downwardapi-volume-a5ca46a6-4390-4147-a6a8-0092839a179c" satisfied condition "success or failure"
Oct  6 20:27:24.295: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-a5ca46a6-4390-4147-a6a8-0092839a179c container client-container: 
STEP: delete the pod
Oct  6 20:27:24.542: INFO: Waiting for pod downwardapi-volume-a5ca46a6-4390-4147-a6a8-0092839a179c to disappear
Oct  6 20:27:24.547: INFO: Pod downwardapi-volume-a5ca46a6-4390-4147-a6a8-0092839a179c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:27:24.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5526" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1107,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:27:24.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 20:27:24.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:27:28.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-894" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1119,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:27:28.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-8945
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct  6 20:27:28.969: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Oct  6 20:27:51.166: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.171:8080/dial?request=hostname&protocol=http&host=10.244.2.170&port=8080&tries=1'] Namespace:pod-network-test-8945 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct  6 20:27:51.166: INFO: >>> kubeConfig: /root/.kube/config
I1006 20:27:51.232680       7 log.go:172] (0x4005a16420) (0x40004365a0) Create stream
I1006 20:27:51.232899       7 log.go:172] (0x4005a16420) (0x40004365a0) Stream added, broadcasting: 1
I1006 20:27:51.236702       7 log.go:172] (0x4005a16420) Reply frame received for 1
I1006 20:27:51.236894       7 log.go:172] (0x4005a16420) (0x40027fd9a0) Create stream
I1006 20:27:51.236964       7 log.go:172] (0x4005a16420) (0x40027fd9a0) Stream added, broadcasting: 3
I1006 20:27:51.238749       7 log.go:172] (0x4005a16420) Reply frame received for 3
I1006 20:27:51.238949       7 log.go:172] (0x4005a16420) (0x40004366e0) Create stream
I1006 20:27:51.239046       7 log.go:172] (0x4005a16420) (0x40004366e0) Stream added, broadcasting: 5
I1006 20:27:51.241013       7 log.go:172] (0x4005a16420) Reply frame received for 5
I1006 20:27:51.325270       7 log.go:172] (0x4005a16420) Data frame received for 3
I1006 20:27:51.325408       7 log.go:172] (0x40027fd9a0) (3) Data frame handling
I1006 20:27:51.325504       7 log.go:172] (0x40027fd9a0) (3) Data frame sent
I1006 20:27:51.325608       7 log.go:172] (0x4005a16420) Data frame received for 3
I1006 20:27:51.325712       7 log.go:172] (0x40027fd9a0) (3) Data frame handling
I1006 20:27:51.326078       7 log.go:172] (0x4005a16420) Data frame received for 5
I1006 20:27:51.326330       7 log.go:172] (0x40004366e0) (5) Data frame handling
I1006 20:27:51.327743       7 log.go:172] (0x4005a16420) Data frame received for 1
I1006 20:27:51.327899       7 log.go:172] (0x40004365a0) (1) Data frame handling
I1006 20:27:51.328074       7 log.go:172] (0x40004365a0) (1) Data frame sent
I1006 20:27:51.328206       7 log.go:172] (0x4005a16420) (0x40004365a0) Stream removed, broadcasting: 1
I1006 20:27:51.328379       7 log.go:172] (0x4005a16420) Go away received
I1006 20:27:51.328806       7 log.go:172] (0x4005a16420) (0x40004365a0) Stream removed, broadcasting: 1
I1006 20:27:51.329104       7 log.go:172] (0x4005a16420) (0x40027fd9a0) Stream removed, broadcasting: 3
I1006 20:27:51.329237       7 log.go:172] (0x4005a16420) (0x40004366e0) Stream removed, broadcasting: 5
Oct  6 20:27:51.330: INFO: Waiting for responses: map[]
Oct  6 20:27:51.337: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.171:8080/dial?request=hostname&protocol=http&host=10.244.1.18&port=8080&tries=1'] Namespace:pod-network-test-8945 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct  6 20:27:51.337: INFO: >>> kubeConfig: /root/.kube/config
I1006 20:27:51.397269       7 log.go:172] (0x4002cb4a50) (0x40027fdd60) Create stream
I1006 20:27:51.397411       7 log.go:172] (0x4002cb4a50) (0x40027fdd60) Stream added, broadcasting: 1
I1006 20:27:51.401485       7 log.go:172] (0x4002cb4a50) Reply frame received for 1
I1006 20:27:51.401762       7 log.go:172] (0x4002cb4a50) (0x4000315cc0) Create stream
I1006 20:27:51.401895       7 log.go:172] (0x4002cb4a50) (0x4000315cc0) Stream added, broadcasting: 3
I1006 20:27:51.403727       7 log.go:172] (0x4002cb4a50) Reply frame received for 3
I1006 20:27:51.403910       7 log.go:172] (0x4002cb4a50) (0x40027fdea0) Create stream
I1006 20:27:51.403986       7 log.go:172] (0x4002cb4a50) (0x40027fdea0) Stream added, broadcasting: 5
I1006 20:27:51.405435       7 log.go:172] (0x4002cb4a50) Reply frame received for 5
I1006 20:27:51.466080       7 log.go:172] (0x4002cb4a50) Data frame received for 3
I1006 20:27:51.466198       7 log.go:172] (0x4000315cc0) (3) Data frame handling
I1006 20:27:51.466288       7 log.go:172] (0x4000315cc0) (3) Data frame sent
I1006 20:27:51.467146       7 log.go:172] (0x4002cb4a50) Data frame received for 5
I1006 20:27:51.467272       7 log.go:172] (0x40027fdea0) (5) Data frame handling
I1006 20:27:51.467439       7 log.go:172] (0x4002cb4a50) Data frame received for 3
I1006 20:27:51.467602       7 log.go:172] (0x4000315cc0) (3) Data frame handling
I1006 20:27:51.469001       7 log.go:172] (0x4002cb4a50) Data frame received for 1
I1006 20:27:51.469135       7 log.go:172] (0x40027fdd60) (1) Data frame handling
I1006 20:27:51.469268       7 log.go:172] (0x40027fdd60) (1) Data frame sent
I1006 20:27:51.469604       7 log.go:172] (0x4002cb4a50) (0x40027fdd60) Stream removed, broadcasting: 1
I1006 20:27:51.469770       7 log.go:172] (0x4002cb4a50) Go away received
I1006 20:27:51.470002       7 log.go:172] (0x4002cb4a50) (0x40027fdd60) Stream removed, broadcasting: 1
I1006 20:27:51.470102       7 log.go:172] (0x4002cb4a50) (0x4000315cc0) Stream removed, broadcasting: 3
I1006 20:27:51.470179       7 log.go:172] (0x4002cb4a50) (0x40027fdea0) Stream removed, broadcasting: 5
Oct  6 20:27:51.470: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:27:51.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8945" for this suite.

• [SLOW TEST:22.611 seconds]
[sig-network] Networking
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1152,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:27:51.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct  6 20:27:51.572: INFO: Waiting up to 5m0s for pod "pod-0b24ed4e-3f49-4592-9ee5-c4833d35f37d" in namespace "emptydir-4790" to be "success or failure"
Oct  6 20:27:51.602: INFO: Pod "pod-0b24ed4e-3f49-4592-9ee5-c4833d35f37d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.72339ms
Oct  6 20:27:53.609: INFO: Pod "pod-0b24ed4e-3f49-4592-9ee5-c4833d35f37d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037589879s
Oct  6 20:27:55.617: INFO: Pod "pod-0b24ed4e-3f49-4592-9ee5-c4833d35f37d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045144207s
STEP: Saw pod success
Oct  6 20:27:55.617: INFO: Pod "pod-0b24ed4e-3f49-4592-9ee5-c4833d35f37d" satisfied condition "success or failure"
Oct  6 20:27:55.622: INFO: Trying to get logs from node jerma-worker pod pod-0b24ed4e-3f49-4592-9ee5-c4833d35f37d container test-container: 
STEP: delete the pod
Oct  6 20:27:55.649: INFO: Waiting for pod pod-0b24ed4e-3f49-4592-9ee5-c4833d35f37d to disappear
Oct  6 20:27:55.780: INFO: Pod pod-0b24ed4e-3f49-4592-9ee5-c4833d35f37d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:27:55.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4790" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1177,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:27:55.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:28:28.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4329" for this suite.

• [SLOW TEST:32.300 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1190,"failed":0}
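The `terminate-cmd-rpa/rpof/rpn` containers above appear to exercise the three restart policies (Always, OnFailure, Never) against exiting containers and check the resulting RestartCount, Phase, and State. A hedged sketch of the restart rule being verified — `shouldRestart` is a hypothetical helper, not the kubelet's actual code:

```go
package main

import "fmt"

// shouldRestart captures the documented restartPolicy rule:
// Always restarts regardless of exit code, OnFailure only on a
// non-zero exit, Never not at all.
func shouldRestart(policy string, exitCode int) bool {
	switch policy {
	case "Always":
		return true
	case "OnFailure":
		return exitCode != 0
	default: // "Never"
		return false
	}
}

func main() {
	fmt.Println(shouldRestart("Always", 0))    // true
	fmt.Println(shouldRestart("OnFailure", 0)) // false
	fmt.Println(shouldRestart("OnFailure", 1)) // true
	fmt.Println(shouldRestart("Never", 1))     // false
}
```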
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:28:28.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 20:28:28.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6846'
Oct  6 20:28:32.608: INFO: stderr: ""
Oct  6 20:28:32.608: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Oct  6 20:28:32.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6846'
Oct  6 20:28:34.453: INFO: stderr: ""
Oct  6 20:28:34.453: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Oct  6 20:28:35.464: INFO: Selector matched 1 pods for map[app:agnhost]
Oct  6 20:28:35.466: INFO: Found 0 / 1
Oct  6 20:28:36.463: INFO: Selector matched 1 pods for map[app:agnhost]
Oct  6 20:28:36.463: INFO: Found 1 / 1
Oct  6 20:28:36.464: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Oct  6 20:28:36.469: INFO: Selector matched 1 pods for map[app:agnhost]
Oct  6 20:28:36.469: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Oct  6 20:28:36.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-j6p68 --namespace=kubectl-6846'
Oct  6 20:28:37.813: INFO: stderr: ""
Oct  6 20:28:37.814: INFO: stdout: "Name:         agnhost-master-j6p68\nNamespace:    kubectl-6846\nPriority:     0\nNode:         jerma-worker2/172.18.0.10\nStart Time:   Tue, 06 Oct 2020 20:28:32 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.244.1.20\nIPs:\n  IP:           10.244.1.20\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://4a99d90dcd42a827494812882f56b056ab07fda734a6a11826d7e7c8d0fa8e2b\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 06 Oct 2020 20:28:35 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-m5nx9 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-m5nx9:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-m5nx9\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                    Message\n  ----    ------     ----       ----                    -------\n  Normal  Scheduled  <unknown>  default-scheduler       Successfully assigned kubectl-6846/agnhost-master-j6p68 to jerma-worker2\n  Normal  Pulled     4s         kubelet, jerma-worker2  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    2s         kubelet, jerma-worker2  Created container agnhost-master\n  Normal  Started    2s         kubelet, jerma-worker2  Started container agnhost-master\n"
Oct  6 20:28:37.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-6846'
Oct  6 20:28:39.194: INFO: stderr: ""
Oct  6 20:28:39.194: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-6846\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  7s    replication-controller  Created pod: agnhost-master-j6p68\n"
Oct  6 20:28:39.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-6846'
Oct  6 20:28:40.520: INFO: stderr: ""
Oct  6 20:28:40.520: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-6846\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.100.139.123\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.1.20:6379\nSession Affinity:  None\nEvents:            <none>\n"
Oct  6 20:28:40.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane'
Oct  6 20:28:41.914: INFO: stderr: ""
Oct  6 20:28:41.914: INFO: stdout: "Name:               jerma-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Wed, 23 Sep 2020 08:26:58 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-control-plane\n  AcquireTime:     <unset>\n  RenewTime:       Tue, 06 Oct 2020 20:28:40 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Tue, 06 Oct 2020 20:24:36 +0000   Wed, 23 Sep 2020 08:26:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Tue, 06 Oct 2020 20:24:36 +0000   Wed, 23 Sep 2020 08:26:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Tue, 06 Oct 2020 20:24:36 +0000   Wed, 23 Sep 2020 08:26:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Tue, 06 Oct 2020 20:24:36 +0000   Wed, 23 Sep 2020 08:27:23 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.8\n  Hostname:    jerma-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759868Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759868Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 fe2aca8844154d87b6440058d7a6a967\n  System UUID:                dfffb871-82a7-49e8-b93c-4170ac55bd08\n  Boot ID:                    b267d78b-f69b-4338-80e8-3f4944338e5d\n  Kernel Version:             4.15.0-118-generic\n  OS Image:                   Ubuntu 19.10\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3-14-g449e9269\n  Kubelet Version:            v1.17.5\n  Kube-Proxy Version:         v1.17.5\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-6955765f44-7bd2n                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     13d\n  kube-system                 coredns-6955765f44-bxgn5                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     13d\n  kube-system                 etcd-jerma-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13d\n  kube-system                 kindnet-cv4pq                                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      13d\n  kube-system                 kube-apiserver-jerma-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         13d\n  kube-system                 kube-controller-manager-jerma-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         13d\n  kube-system                 kube-proxy-vr8mk                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         13d\n  kube-system                 kube-scheduler-jerma-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         13d\n  local-path-storage          local-path-provisioner-58f6947c7-wgwst         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              <none>\n"
Oct  6 20:28:41.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-6846'
Oct  6 20:28:43.181: INFO: stderr: ""
Oct  6 20:28:43.181: INFO: stdout: "Name:         kubectl-6846\nLabels:       e2e-framework=kubectl\n              e2e-run=c6e194a1-7168-4703-bc60-030734409460\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:28:43.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6846" for this suite.

• [SLOW TEST:15.097 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1048
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":71,"skipped":1200,"failed":0}
SS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:28:43.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:28:43.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3204" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":72,"skipped":1202,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:28:43.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Oct  6 20:28:43.516: INFO: Waiting up to 5m0s for pod "client-containers-553357e9-2eba-4233-bcd8-2b5b3fa82668" in namespace "containers-2568" to be "success or failure"
Oct  6 20:28:43.524: INFO: Pod "client-containers-553357e9-2eba-4233-bcd8-2b5b3fa82668": Phase="Pending", Reason="", readiness=false. Elapsed: 7.40225ms
Oct  6 20:28:45.759: INFO: Pod "client-containers-553357e9-2eba-4233-bcd8-2b5b3fa82668": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242560108s
Oct  6 20:28:47.766: INFO: Pod "client-containers-553357e9-2eba-4233-bcd8-2b5b3fa82668": Phase="Running", Reason="", readiness=true. Elapsed: 4.249442201s
Oct  6 20:28:49.773: INFO: Pod "client-containers-553357e9-2eba-4233-bcd8-2b5b3fa82668": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.257089148s
STEP: Saw pod success
Oct  6 20:28:49.774: INFO: Pod "client-containers-553357e9-2eba-4233-bcd8-2b5b3fa82668" satisfied condition "success or failure"
Oct  6 20:28:49.785: INFO: Trying to get logs from node jerma-worker2 pod client-containers-553357e9-2eba-4233-bcd8-2b5b3fa82668 container test-container: <nil>
STEP: delete the pod
Oct  6 20:28:49.803: INFO: Waiting for pod client-containers-553357e9-2eba-4233-bcd8-2b5b3fa82668 to disappear
Oct  6 20:28:49.808: INFO: Pod client-containers-553357e9-2eba-4233-bcd8-2b5b3fa82668 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:28:49.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2568" for this suite.

• [SLOW TEST:6.481 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1231,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:28:49.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 20:28:49.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Oct  6 20:29:08.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4942 create -f -'
Oct  6 20:29:13.373: INFO: stderr: ""
Oct  6 20:29:13.373: INFO: stdout: "e2e-test-crd-publish-openapi-190-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Oct  6 20:29:13.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4942 delete e2e-test-crd-publish-openapi-190-crds test-cr'
Oct  6 20:29:14.634: INFO: stderr: ""
Oct  6 20:29:14.634: INFO: stdout: "e2e-test-crd-publish-openapi-190-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Oct  6 20:29:14.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4942 apply -f -'
Oct  6 20:29:16.286: INFO: stderr: ""
Oct  6 20:29:16.286: INFO: stdout: "e2e-test-crd-publish-openapi-190-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Oct  6 20:29:16.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4942 delete e2e-test-crd-publish-openapi-190-crds test-cr'
Oct  6 20:29:17.551: INFO: stderr: ""
Oct  6 20:29:17.551: INFO: stdout: "e2e-test-crd-publish-openapi-190-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Oct  6 20:29:17.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-190-crds'
Oct  6 20:29:19.151: INFO: stderr: ""
Oct  6 20:29:19.151: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-190-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<Object>\n     Specification of Waldo\n\n   status\t<Object>\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:29:37.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4942" for this suite.

• [SLOW TEST:48.131 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":74,"skipped":1235,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:29:37.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-0bf21550-6045-4d0d-ad59-5087b44086ca
STEP: Creating a pod to test consume configMaps
Oct  6 20:29:38.036: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e96e0376-b051-4d37-a1a2-dd348da74c18" in namespace "projected-796" to be "success or failure"
Oct  6 20:29:38.051: INFO: Pod "pod-projected-configmaps-e96e0376-b051-4d37-a1a2-dd348da74c18": Phase="Pending", Reason="", readiness=false. Elapsed: 14.968586ms
Oct  6 20:29:40.058: INFO: Pod "pod-projected-configmaps-e96e0376-b051-4d37-a1a2-dd348da74c18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022247725s
Oct  6 20:29:42.076: INFO: Pod "pod-projected-configmaps-e96e0376-b051-4d37-a1a2-dd348da74c18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040293873s
STEP: Saw pod success
Oct  6 20:29:42.077: INFO: Pod "pod-projected-configmaps-e96e0376-b051-4d37-a1a2-dd348da74c18" satisfied condition "success or failure"
Oct  6 20:29:42.082: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-e96e0376-b051-4d37-a1a2-dd348da74c18 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Oct  6 20:29:42.117: INFO: Waiting for pod pod-projected-configmaps-e96e0376-b051-4d37-a1a2-dd348da74c18 to disappear
Oct  6 20:29:42.121: INFO: Pod pod-projected-configmaps-e96e0376-b051-4d37-a1a2-dd348da74c18 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:29:42.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-796" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1286,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:29:42.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Oct  6 20:29:42.294: INFO: Waiting up to 5m0s for pod "downwardapi-volume-34ac45b2-1c14-4008-942c-0071a6f14581" in namespace "projected-6487" to be "success or failure"
Oct  6 20:29:42.342: INFO: Pod "downwardapi-volume-34ac45b2-1c14-4008-942c-0071a6f14581": Phase="Pending", Reason="", readiness=false. Elapsed: 47.737711ms
Oct  6 20:29:44.441: INFO: Pod "downwardapi-volume-34ac45b2-1c14-4008-942c-0071a6f14581": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146767714s
Oct  6 20:29:46.449: INFO: Pod "downwardapi-volume-34ac45b2-1c14-4008-942c-0071a6f14581": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.154258918s
STEP: Saw pod success
Oct  6 20:29:46.449: INFO: Pod "downwardapi-volume-34ac45b2-1c14-4008-942c-0071a6f14581" satisfied condition "success or failure"
Oct  6 20:29:46.454: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-34ac45b2-1c14-4008-942c-0071a6f14581 container client-container: <nil>
STEP: delete the pod
Oct  6 20:29:46.504: INFO: Waiting for pod downwardapi-volume-34ac45b2-1c14-4008-942c-0071a6f14581 to disappear
Oct  6 20:29:46.510: INFO: Pod downwardapi-volume-34ac45b2-1c14-4008-942c-0071a6f14581 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:29:46.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6487" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1293,"failed":0}
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:29:46.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 20:29:46.618: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Oct  6 20:29:46.652: INFO: Number of nodes with available pods: 0
Oct  6 20:29:46.652: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Oct  6 20:29:46.727: INFO: Number of nodes with available pods: 0
Oct  6 20:29:46.727: INFO: Node jerma-worker2 is running more than one daemon pod
Oct  6 20:29:47.735: INFO: Number of nodes with available pods: 0
Oct  6 20:29:47.735: INFO: Node jerma-worker2 is running more than one daemon pod
Oct  6 20:29:48.735: INFO: Number of nodes with available pods: 0
Oct  6 20:29:48.735: INFO: Node jerma-worker2 is running more than one daemon pod
Oct  6 20:29:49.733: INFO: Number of nodes with available pods: 0
Oct  6 20:29:49.734: INFO: Node jerma-worker2 is running more than one daemon pod
Oct  6 20:29:50.765: INFO: Number of nodes with available pods: 1
Oct  6 20:29:50.765: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Oct  6 20:29:50.815: INFO: Number of nodes with available pods: 1
Oct  6 20:29:50.815: INFO: Number of running nodes: 0, number of available pods: 1
Oct  6 20:29:51.821: INFO: Number of nodes with available pods: 0
Oct  6 20:29:51.821: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Oct  6 20:29:51.843: INFO: Number of nodes with available pods: 0
Oct  6 20:29:51.843: INFO: Node jerma-worker2 is running more than one daemon pod
Oct  6 20:29:52.850: INFO: Number of nodes with available pods: 0
Oct  6 20:29:52.850: INFO: Node jerma-worker2 is running more than one daemon pod
Oct  6 20:29:53.850: INFO: Number of nodes with available pods: 0
Oct  6 20:29:53.850: INFO: Node jerma-worker2 is running more than one daemon pod
Oct  6 20:29:54.850: INFO: Number of nodes with available pods: 0
Oct  6 20:29:54.851: INFO: Node jerma-worker2 is running more than one daemon pod
Oct  6 20:29:55.850: INFO: Number of nodes with available pods: 0
Oct  6 20:29:55.850: INFO: Node jerma-worker2 is running more than one daemon pod
Oct  6 20:29:56.872: INFO: Number of nodes with available pods: 0
Oct  6 20:29:56.872: INFO: Node jerma-worker2 is running more than one daemon pod
Oct  6 20:29:57.855: INFO: Number of nodes with available pods: 1
Oct  6 20:29:57.855: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5868, will wait for the garbage collector to delete the pods
Oct  6 20:29:57.927: INFO: Deleting DaemonSet.extensions daemon-set took: 10.096461ms
Oct  6 20:29:58.229: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.808979ms
Oct  6 20:30:03.834: INFO: Number of nodes with available pods: 0
Oct  6 20:30:03.835: INFO: Number of running nodes: 0, number of available pods: 0
Oct  6 20:30:03.841: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5868/daemonsets","resourceVersion":"3602882"},"items":null}

Oct  6 20:30:03.845: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5868/pods","resourceVersion":"3602882"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:30:03.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5868" for this suite.

• [SLOW TEST:17.391 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":77,"skipped":1295,"failed":0}
SSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:30:03.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;
  check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-304 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-304;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-304 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-304;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-304.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-304.svc;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-304.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-304.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-304.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-304.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-304.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-304.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-304.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-304.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-304.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-304.svc;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-304.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;
  check="$$(dig +notcp +noall +answer +search 208.164.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.164.208_udp@PTR;
  check="$$(dig +tcp +noall +answer +search 208.164.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.164.208_tcp@PTR;
  sleep 1;
done

STEP: Running these commands on jessie: for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;
  check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-304 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-304;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-304 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-304;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-304.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-304.svc;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-304.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-304.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-304.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-304.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-304.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-304.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-304.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-304.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-304.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-304.svc;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-304.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;
  check="$$(dig +notcp +noall +answer +search 208.164.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.164.208_udp@PTR;
  check="$$(dig +tcp +noall +answer +search 208.164.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.164.208_tcp@PTR;
  sleep 1;
done

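The probe commands above derive two DNS names from an IP address: a pod A-record name built by the `hostname -i | awk -F.` pipeline (dots replaced with dashes, then namespace and `pod.cluster.local` appended), and a reverse-lookup PTR name (octets reversed, `.in-addr.arpa.` appended). A small sketch of both transforms, using the IPs that appear in the log (the pod IP here is only an example):

```python
def pod_a_record(ip, namespace, domain="cluster.local"):
    """Build a Kubernetes pod A-record name, e.g. 10-244-1-5.dns-304.pod.cluster.local."""
    return f"{ip.replace('.', '-')}.{namespace}.pod.{domain}"

def ptr_name(ip):
    """Build the reverse-DNS PTR query name for an IPv4 address."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa."

print(pod_a_record("10.244.1.5", "dns-304"))  # 10-244-1-5.dns-304.pod.cluster.local
print(ptr_name("10.97.164.208"))              # 208.164.97.10.in-addr.arpa.
```

The PTR output matches the `208.164.97.10.in-addr.arpa.` query string in the commands above, which targets the service's ClusterIP 10.97.164.208.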
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct  6 20:30:10.099: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:10.106: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:10.110: INFO: Unable to read wheezy_udp@dns-test-service.dns-304 from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:10.114: INFO: Unable to read wheezy_tcp@dns-test-service.dns-304 from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:10.119: INFO: Unable to read wheezy_udp@dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:10.123: INFO: Unable to read wheezy_tcp@dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:10.128: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:10.133: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:10.176: INFO: Unable to read jessie_udp@dns-test-service from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:10.179: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:10.182: INFO: Unable to read jessie_udp@dns-test-service.dns-304 from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:10.186: INFO: Unable to read jessie_tcp@dns-test-service.dns-304 from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:10.196: INFO: Unable to read jessie_udp@dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:10.201: INFO: Unable to read jessie_tcp@dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:10.203: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:10.206: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:10.248: INFO: Lookups using dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-304 wheezy_tcp@dns-test-service.dns-304 wheezy_udp@dns-test-service.dns-304.svc wheezy_tcp@dns-test-service.dns-304.svc wheezy_udp@_http._tcp.dns-test-service.dns-304.svc wheezy_tcp@_http._tcp.dns-test-service.dns-304.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-304 jessie_tcp@dns-test-service.dns-304 jessie_udp@dns-test-service.dns-304.svc jessie_tcp@dns-test-service.dns-304.svc jessie_udp@_http._tcp.dns-test-service.dns-304.svc jessie_tcp@_http._tcp.dns-test-service.dns-304.svc]

Oct  6 20:30:15.255: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:15.259: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:15.264: INFO: Unable to read wheezy_udp@dns-test-service.dns-304 from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:15.267: INFO: Unable to read wheezy_tcp@dns-test-service.dns-304 from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:15.271: INFO: Unable to read wheezy_udp@dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:15.275: INFO: Unable to read wheezy_tcp@dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:15.279: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:15.283: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:15.313: INFO: Unable to read jessie_udp@dns-test-service from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:15.318: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:15.323: INFO: Unable to read jessie_udp@dns-test-service.dns-304 from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:15.327: INFO: Unable to read jessie_tcp@dns-test-service.dns-304 from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:15.338: INFO: Unable to read jessie_udp@dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:15.343: INFO: Unable to read jessie_tcp@dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:15.346: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:15.358: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:15.384: INFO: Lookups using dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-304 wheezy_tcp@dns-test-service.dns-304 wheezy_udp@dns-test-service.dns-304.svc wheezy_tcp@dns-test-service.dns-304.svc wheezy_udp@_http._tcp.dns-test-service.dns-304.svc wheezy_tcp@_http._tcp.dns-test-service.dns-304.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-304 jessie_tcp@dns-test-service.dns-304 jessie_udp@dns-test-service.dns-304.svc jessie_tcp@dns-test-service.dns-304.svc jessie_udp@_http._tcp.dns-test-service.dns-304.svc jessie_tcp@_http._tcp.dns-test-service.dns-304.svc]

Oct  6 20:30:20.286: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:20.291: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:20.295: INFO: Unable to read wheezy_udp@dns-test-service.dns-304 from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:20.299: INFO: Unable to read wheezy_tcp@dns-test-service.dns-304 from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:20.304: INFO: Unable to read wheezy_udp@dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:20.308: INFO: Unable to read wheezy_tcp@dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:20.312: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:20.317: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:20.345: INFO: Unable to read jessie_udp@dns-test-service from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:20.350: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:20.355: INFO: Unable to read jessie_udp@dns-test-service.dns-304 from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:20.359: INFO: Unable to read jessie_tcp@dns-test-service.dns-304 from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:20.363: INFO: Unable to read jessie_udp@dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:20.368: INFO: Unable to read jessie_tcp@dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:20.372: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:20.376: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:20.397: INFO: Lookups using dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-304 wheezy_tcp@dns-test-service.dns-304 wheezy_udp@dns-test-service.dns-304.svc wheezy_tcp@dns-test-service.dns-304.svc wheezy_udp@_http._tcp.dns-test-service.dns-304.svc wheezy_tcp@_http._tcp.dns-test-service.dns-304.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-304 jessie_tcp@dns-test-service.dns-304 jessie_udp@dns-test-service.dns-304.svc jessie_tcp@dns-test-service.dns-304.svc jessie_udp@_http._tcp.dns-test-service.dns-304.svc jessie_tcp@_http._tcp.dns-test-service.dns-304.svc]

Oct  6 20:30:25.256: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:25.261: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:25.265: INFO: Unable to read wheezy_udp@dns-test-service.dns-304 from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:25.269: INFO: Unable to read wheezy_tcp@dns-test-service.dns-304 from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:25.274: INFO: Unable to read wheezy_udp@dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:25.278: INFO: Unable to read wheezy_tcp@dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:25.282: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:25.287: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:25.317: INFO: Unable to read jessie_udp@dns-test-service from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:25.321: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:25.326: INFO: Unable to read jessie_udp@dns-test-service.dns-304 from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:25.330: INFO: Unable to read jessie_tcp@dns-test-service.dns-304 from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:25.334: INFO: Unable to read jessie_udp@dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:25.338: INFO: Unable to read jessie_tcp@dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:25.342: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:25.347: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:25.373: INFO: Lookups using dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-304 wheezy_tcp@dns-test-service.dns-304 wheezy_udp@dns-test-service.dns-304.svc wheezy_tcp@dns-test-service.dns-304.svc wheezy_udp@_http._tcp.dns-test-service.dns-304.svc wheezy_tcp@_http._tcp.dns-test-service.dns-304.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-304 jessie_tcp@dns-test-service.dns-304 jessie_udp@dns-test-service.dns-304.svc jessie_tcp@dns-test-service.dns-304.svc jessie_udp@_http._tcp.dns-test-service.dns-304.svc jessie_tcp@_http._tcp.dns-test-service.dns-304.svc]

Oct  6 20:30:30.254: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:30.257: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:30.260: INFO: Unable to read wheezy_udp@dns-test-service.dns-304 from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:30.263: INFO: Unable to read wheezy_tcp@dns-test-service.dns-304 from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:30.266: INFO: Unable to read wheezy_udp@dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:30.268: INFO: Unable to read wheezy_tcp@dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:30.272: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:30.275: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:30.300: INFO: Unable to read jessie_udp@dns-test-service from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:30.303: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:30.345: INFO: Unable to read jessie_udp@dns-test-service.dns-304 from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:30.349: INFO: Unable to read jessie_tcp@dns-test-service.dns-304 from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:30.376: INFO: Unable to read jessie_udp@dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:30.406: INFO: Unable to read jessie_tcp@dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:30.411: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:30.416: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:30.440: INFO: Lookups using dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-304 wheezy_tcp@dns-test-service.dns-304 wheezy_udp@dns-test-service.dns-304.svc wheezy_tcp@dns-test-service.dns-304.svc wheezy_udp@_http._tcp.dns-test-service.dns-304.svc wheezy_tcp@_http._tcp.dns-test-service.dns-304.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-304 jessie_tcp@dns-test-service.dns-304 jessie_udp@dns-test-service.dns-304.svc jessie_tcp@dns-test-service.dns-304.svc jessie_udp@_http._tcp.dns-test-service.dns-304.svc jessie_tcp@_http._tcp.dns-test-service.dns-304.svc]

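The lookup rounds above repeat every five seconds until the result files appear or the framework's timeout expires: the transient "could not find the requested resource" errors are expected while DNS propagates. The general poll-until-ready pattern the framework uses can be sketched as follows (a simplified standalone version, not the e2e framework's `wait.Poll` itself; names and intervals are illustrative):

```python
import time

def poll_until(condition, timeout=5.0, interval=0.01):
    """Call `condition` every `interval` seconds until it returns truthy
    or `timeout` elapses; return the last result observed."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result or time.monotonic() >= deadline:
            return result
        time.sleep(interval)

# Toy condition that only starts succeeding on the third attempt,
# standing in for a DNS record that needs time to propagate.
calls = {"n": 0}
def ready():
    calls["n"] += 1
    return calls["n"] >= 3

print(poll_until(ready))  # True
```

Returning the last result (rather than raising) lets the caller log which checks were still failing at the deadline, much like the "Lookups ... failed for:" summaries in this log.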
Oct  6 20:30:35.258: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:35.263: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:35.267: INFO: Unable to read wheezy_udp@dns-test-service.dns-304 from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:35.271: INFO: Unable to read wheezy_tcp@dns-test-service.dns-304 from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:35.274: INFO: Unable to read wheezy_udp@dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:35.277: INFO: Unable to read wheezy_tcp@dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:35.281: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:35.285: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:35.315: INFO: Unable to read jessie_udp@dns-test-service from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:35.320: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:35.325: INFO: Unable to read jessie_udp@dns-test-service.dns-304 from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:35.329: INFO: Unable to read jessie_tcp@dns-test-service.dns-304 from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:35.333: INFO: Unable to read jessie_udp@dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:35.338: INFO: Unable to read jessie_tcp@dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:35.342: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:35.347: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-304.svc from pod dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72: the server could not find the requested resource (get pods dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72)
Oct  6 20:30:35.371: INFO: Lookups using dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-304 wheezy_tcp@dns-test-service.dns-304 wheezy_udp@dns-test-service.dns-304.svc wheezy_tcp@dns-test-service.dns-304.svc wheezy_udp@_http._tcp.dns-test-service.dns-304.svc wheezy_tcp@_http._tcp.dns-test-service.dns-304.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-304 jessie_tcp@dns-test-service.dns-304 jessie_udp@dns-test-service.dns-304.svc jessie_tcp@dns-test-service.dns-304.svc jessie_udp@_http._tcp.dns-test-service.dns-304.svc jessie_tcp@_http._tcp.dns-test-service.dns-304.svc]

Oct  6 20:30:40.425: INFO: DNS probes using dns-304/dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:30:41.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-304" for this suite.

• [SLOW TEST:37.581 seconds]
[sig-network] DNS
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":78,"skipped":1300,"failed":0}
SS
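The lookups above walk through progressively qualified forms of the same service name, each completed via the pod's `resolv.conf` search path. A minimal manual reproduction of these probes (a sketch only; it assumes the test pod's image ships `nslookup`, which the jessie-dnsutils utility image used here does) might look like:

```shell
# Hypothetical by-hand version of the DNS probes in this test. Each form
# relies on the pod's search domains to expand to the FQDN.
POD=dns-test-ea2dc6c8-02c2-4ec6-a918-203786856c72   # pod created by the test
NS=dns-304
kubectl exec -n "$NS" "$POD" -- nslookup dns-test-service           # bare service name
kubectl exec -n "$NS" "$POD" -- nslookup dns-test-service.$NS       # <service>.<namespace>
kubectl exec -n "$NS" "$POD" -- nslookup dns-test-service.$NS.svc   # <service>.<namespace>.svc
```

The `_http._tcp.` forms in the failure list are SRV-record lookups for the service's named `http` port; they resolve only once the service's endpoints are ready, which is why the early iterations fail and the probe at 20:30:40 finally succeeds.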
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:30:41.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run job
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Oct  6 20:30:41.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4958'
Oct  6 20:30:42.955: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Oct  6 20:30:42.955: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Oct  6 20:30:42.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-4958'
Oct  6 20:30:44.232: INFO: stderr: ""
Oct  6 20:30:44.232: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:30:44.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4958" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Deprecated] [Conformance]","total":278,"completed":79,"skipped":1302,"failed":0}
SSS
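The stderr line in this test records that the `job/v1` generator is deprecated. As the warning itself suggests, the same Job can be created with `kubectl create`, which implies `restartPolicy: OnFailure` for Jobs by default (a sketch of the equivalent invocation, not part of the test itself):

```shell
# Deprecated form exercised by the test:
kubectl run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 \
  --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4958

# Modern equivalent per the deprecation warning:
kubectl create job e2e-test-httpd-job \
  --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4958
```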
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:30:44.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-7775, will wait for the garbage collector to delete the pods
Oct  6 20:30:50.397: INFO: Deleting Job.batch foo took: 9.835256ms
Oct  6 20:30:50.498: INFO: Terminating Job.batch foo pods took: 100.944154ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:31:33.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7775" for this suite.

• [SLOW TEST:49.575 seconds]
[sig-apps] Job
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":80,"skipped":1305,"failed":0}
SSSSSSS
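Note the "will wait for the garbage collector to delete the pods" line: deleting a Job cascades to its pods asynchronously, which is why the test spends most of its 49 seconds in "Ensuring job was deleted". The CLI analogue is simply (a sketch, assuming access to the test namespace before it is destroyed):

```shell
# Default deletion cascades to the Job's pods via the garbage collector.
kubectl delete job foo -n job-7775
# To orphan the pods instead, set the propagation policy explicitly
# (flag syntax varies by kubectl version; --cascade=orphan in recent releases).
```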
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:31:33.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Oct  6 20:31:33.920: INFO: Waiting up to 5m0s for pod "pod-bee7434f-e905-4088-a56c-fdd71888a099" in namespace "emptydir-7903" to be "success or failure"
Oct  6 20:31:33.926: INFO: Pod "pod-bee7434f-e905-4088-a56c-fdd71888a099": Phase="Pending", Reason="", readiness=false. Elapsed: 5.827471ms
Oct  6 20:31:35.933: INFO: Pod "pod-bee7434f-e905-4088-a56c-fdd71888a099": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012746387s
Oct  6 20:31:37.955: INFO: Pod "pod-bee7434f-e905-4088-a56c-fdd71888a099": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035288112s
STEP: Saw pod success
Oct  6 20:31:37.956: INFO: Pod "pod-bee7434f-e905-4088-a56c-fdd71888a099" satisfied condition "success or failure"
Oct  6 20:31:37.960: INFO: Trying to get logs from node jerma-worker2 pod pod-bee7434f-e905-4088-a56c-fdd71888a099 container test-container: 
STEP: delete the pod
Oct  6 20:31:38.002: INFO: Waiting for pod pod-bee7434f-e905-4088-a56c-fdd71888a099 to disappear
Oct  6 20:31:38.019: INFO: Pod pod-bee7434f-e905-4088-a56c-fdd71888a099 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:31:38.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7903" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1312,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
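The pod this test creates writes a 0666-mode file into a tmpfs-backed emptyDir as a non-root user and checks the result. A minimal sketch of such a pod (an assumed manifest for illustration, not the test's exact pod; `medium: Memory` is what places the volume on tmpfs):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/f && chmod 0666 /mnt/f && ls -l /mnt/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory     # tmpfs rather than node disk
  restartPolicy: Never
EOF
```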
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:31:38.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:31:54.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-549" for this suite.

• [SLOW TEST:16.244 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":82,"skipped":1345,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
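The two quotas this test creates differ only in scope: `Terminating` matches pods with `activeDeadlineSeconds` set, `NotTerminating` matches long-running pods without it, and each quota counts usage only from pods in its scope. A sketch of the pair (resource names here are assumptions, not the test's generated names):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-terminating
spec:
  hard:
    pods: "1"
  scopes: ["Terminating"]      # only pods with activeDeadlineSeconds set
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-not-terminating
spec:
  hard:
    pods: "1"
  scopes: ["NotTerminating"]   # only long-running pods
EOF
```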
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:31:54.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-3307
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct  6 20:31:54.372: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Oct  6 20:32:18.562: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.30:8080/dial?request=hostname&protocol=udp&host=10.244.2.180&port=8081&tries=1'] Namespace:pod-network-test-3307 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct  6 20:32:18.562: INFO: >>> kubeConfig: /root/.kube/config
I1006 20:32:18.616329       7 log.go:172] (0x400291b8c0) (0x4002959360) Create stream
I1006 20:32:18.616484       7 log.go:172] (0x400291b8c0) (0x4002959360) Stream added, broadcasting: 1
I1006 20:32:18.619977       7 log.go:172] (0x400291b8c0) Reply frame received for 1
I1006 20:32:18.620134       7 log.go:172] (0x400291b8c0) (0x40016a2fa0) Create stream
I1006 20:32:18.620224       7 log.go:172] (0x400291b8c0) (0x40016a2fa0) Stream added, broadcasting: 3
I1006 20:32:18.621883       7 log.go:172] (0x400291b8c0) Reply frame received for 3
I1006 20:32:18.622020       7 log.go:172] (0x400291b8c0) (0x4002959540) Create stream
I1006 20:32:18.622107       7 log.go:172] (0x400291b8c0) (0x4002959540) Stream added, broadcasting: 5
I1006 20:32:18.623533       7 log.go:172] (0x400291b8c0) Reply frame received for 5
I1006 20:32:18.722082       7 log.go:172] (0x400291b8c0) Data frame received for 3
I1006 20:32:18.722462       7 log.go:172] (0x40016a2fa0) (3) Data frame handling
I1006 20:32:18.722629       7 log.go:172] (0x40016a2fa0) (3) Data frame sent
I1006 20:32:18.722790       7 log.go:172] (0x400291b8c0) Data frame received for 5
I1006 20:32:18.722954       7 log.go:172] (0x4002959540) (5) Data frame handling
I1006 20:32:18.723181       7 log.go:172] (0x400291b8c0) Data frame received for 3
I1006 20:32:18.723412       7 log.go:172] (0x40016a2fa0) (3) Data frame handling
I1006 20:32:18.724712       7 log.go:172] (0x400291b8c0) Data frame received for 1
I1006 20:32:18.724958       7 log.go:172] (0x4002959360) (1) Data frame handling
I1006 20:32:18.725097       7 log.go:172] (0x4002959360) (1) Data frame sent
I1006 20:32:18.725226       7 log.go:172] (0x400291b8c0) (0x4002959360) Stream removed, broadcasting: 1
I1006 20:32:18.725375       7 log.go:172] (0x400291b8c0) Go away received
I1006 20:32:18.725754       7 log.go:172] (0x400291b8c0) (0x4002959360) Stream removed, broadcasting: 1
I1006 20:32:18.725905       7 log.go:172] (0x400291b8c0) (0x40016a2fa0) Stream removed, broadcasting: 3
I1006 20:32:18.726035       7 log.go:172] (0x400291b8c0) (0x4002959540) Stream removed, broadcasting: 5
Oct  6 20:32:18.726: INFO: Waiting for responses: map[]
Oct  6 20:32:18.732: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.30:8080/dial?request=hostname&protocol=udp&host=10.244.1.29&port=8081&tries=1'] Namespace:pod-network-test-3307 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct  6 20:32:18.733: INFO: >>> kubeConfig: /root/.kube/config
I1006 20:32:18.788735       7 log.go:172] (0x4002c064d0) (0x4000ac6780) Create stream
I1006 20:32:18.789015       7 log.go:172] (0x4002c064d0) (0x4000ac6780) Stream added, broadcasting: 1
I1006 20:32:18.792400       7 log.go:172] (0x4002c064d0) Reply frame received for 1
I1006 20:32:18.792611       7 log.go:172] (0x4002c064d0) (0x4000ac6820) Create stream
I1006 20:32:18.792718       7 log.go:172] (0x4002c064d0) (0x4000ac6820) Stream added, broadcasting: 3
I1006 20:32:18.794581       7 log.go:172] (0x4002c064d0) Reply frame received for 3
I1006 20:32:18.794727       7 log.go:172] (0x4002c064d0) (0x4002959680) Create stream
I1006 20:32:18.794828       7 log.go:172] (0x4002c064d0) (0x4002959680) Stream added, broadcasting: 5
I1006 20:32:18.796328       7 log.go:172] (0x4002c064d0) Reply frame received for 5
I1006 20:32:18.870859       7 log.go:172] (0x4002c064d0) Data frame received for 3
I1006 20:32:18.871056       7 log.go:172] (0x4000ac6820) (3) Data frame handling
I1006 20:32:18.871278       7 log.go:172] (0x4000ac6820) (3) Data frame sent
I1006 20:32:18.871470       7 log.go:172] (0x4002c064d0) Data frame received for 5
I1006 20:32:18.871642       7 log.go:172] (0x4002959680) (5) Data frame handling
I1006 20:32:18.871866       7 log.go:172] (0x4002c064d0) Data frame received for 3
I1006 20:32:18.872037       7 log.go:172] (0x4000ac6820) (3) Data frame handling
I1006 20:32:18.873638       7 log.go:172] (0x4002c064d0) Data frame received for 1
I1006 20:32:18.873819       7 log.go:172] (0x4000ac6780) (1) Data frame handling
I1006 20:32:18.874018       7 log.go:172] (0x4000ac6780) (1) Data frame sent
I1006 20:32:18.874199       7 log.go:172] (0x4002c064d0) (0x4000ac6780) Stream removed, broadcasting: 1
I1006 20:32:18.874360       7 log.go:172] (0x4002c064d0) Go away received
I1006 20:32:18.874859       7 log.go:172] (0x4002c064d0) (0x4000ac6780) Stream removed, broadcasting: 1
I1006 20:32:18.875058       7 log.go:172] (0x4002c064d0) (0x4000ac6820) Stream removed, broadcasting: 3
I1006 20:32:18.875209       7 log.go:172] (0x4002c064d0) (0x4002959680) Stream removed, broadcasting: 5
Oct  6 20:32:18.875: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:32:18.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3307" for this suite.

• [SLOW TEST:24.594 seconds]
[sig-network] Networking
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1371,"failed":0}
SSSSSS
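Each `ExecWithOptions` above runs `curl` inside the host-network test pod against the agnhost webserver's `dial` endpoint, which in turn contacts the target pod over UDP and reports the hostname it got back; an empty `Waiting for responses: map[]` means every target answered. Reproduced by hand with the pod names and IPs observed in this run (a sketch; the pods exist only while the test namespace does):

```shell
kubectl exec -n pod-network-test-3307 host-test-container-pod -c agnhost -- \
  curl -g -q -s 'http://10.244.1.30:8080/dial?request=hostname&protocol=udp&host=10.244.2.180&port=8081&tries=1'
```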
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:32:18.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Oct  6 20:32:19.003: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:32:34.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-876" for this suite.

• [SLOW TEST:15.451 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1377,"failed":0}
SSSS
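This test establishes a watch before submitting the pod, so creation, graceful termination, and deletion arrive as events rather than being polled for. The CLI analogue of that observation step (a sketch against the test namespace) is:

```shell
# Stream pod lifecycle changes instead of polling; run before creating the pod.
kubectl get pods -n pods-876 --watch
```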
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:32:34.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Oct  6 20:32:34.504: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 20:32:34.514: INFO: Number of nodes with available pods: 0
Oct  6 20:32:34.515: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 20:32:35.528: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 20:32:35.535: INFO: Number of nodes with available pods: 0
Oct  6 20:32:35.535: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 20:32:36.669: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 20:32:36.674: INFO: Number of nodes with available pods: 0
Oct  6 20:32:36.674: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 20:32:37.526: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 20:32:37.531: INFO: Number of nodes with available pods: 0
Oct  6 20:32:37.531: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 20:32:38.526: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 20:32:38.532: INFO: Number of nodes with available pods: 1
Oct  6 20:32:38.532: INFO: Node jerma-worker2 is running more than one daemon pod
Oct  6 20:32:39.526: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 20:32:39.533: INFO: Number of nodes with available pods: 2
Oct  6 20:32:39.533: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Oct  6 20:32:39.629: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 20:32:39.652: INFO: Number of nodes with available pods: 1
Oct  6 20:32:39.652: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 20:32:40.664: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 20:32:40.670: INFO: Number of nodes with available pods: 1
Oct  6 20:32:40.670: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 20:32:41.662: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 20:32:41.667: INFO: Number of nodes with available pods: 1
Oct  6 20:32:41.667: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 20:32:42.663: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 20:32:42.669: INFO: Number of nodes with available pods: 2
Oct  6 20:32:42.669: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8645, will wait for the garbage collector to delete the pods
Oct  6 20:32:42.741: INFO: Deleting DaemonSet.extensions daemon-set took: 9.072461ms
Oct  6 20:32:43.242: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.942706ms
Oct  6 20:32:54.347: INFO: Number of nodes with available pods: 0
Oct  6 20:32:54.347: INFO: Number of running nodes: 0, number of available pods: 0
Oct  6 20:32:54.351: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8645/daemonsets","resourceVersion":"3603749"},"items":null}

Oct  6 20:32:54.354: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8645/pods","resourceVersion":"3603749"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:32:54.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8645" for this suite.

• [SLOW TEST:20.058 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":85,"skipped":1381,"failed":0}
SS
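The repeated "DaemonSet pods can't tolerate node jerma-control-plane" lines show why the DaemonSet lands on only the two workers: the control-plane node carries a `node-role.kubernetes.io/master:NoSchedule` taint. A DaemonSet that should also cover that node would add a matching toleration to its pod template, e.g. via a patch like this (a sketch; the test's DaemonSet is deleted before its namespace is destroyed):

```shell
kubectl patch daemonset daemon-set -n daemonsets-8645 --type merge -p \
  '{"spec":{"template":{"spec":{"tolerations":[{"key":"node-role.kubernetes.io/master","effect":"NoSchedule","operator":"Exists"}]}}}}'
```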
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:32:54.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Oct  6 20:32:55.086: INFO: created pod pod-service-account-defaultsa
Oct  6 20:32:55.086: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Oct  6 20:32:55.108: INFO: created pod pod-service-account-mountsa
Oct  6 20:32:55.108: INFO: pod pod-service-account-mountsa service account token volume mount: true
Oct  6 20:32:55.126: INFO: created pod pod-service-account-nomountsa
Oct  6 20:32:55.126: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Oct  6 20:32:55.193: INFO: created pod pod-service-account-defaultsa-mountspec
Oct  6 20:32:55.193: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Oct  6 20:32:55.231: INFO: created pod pod-service-account-mountsa-mountspec
Oct  6 20:32:55.231: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Oct  6 20:32:55.269: INFO: created pod pod-service-account-nomountsa-mountspec
Oct  6 20:32:55.269: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Oct  6 20:32:55.275: INFO: created pod pod-service-account-defaultsa-nomountspec
Oct  6 20:32:55.276: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Oct  6 20:32:55.342: INFO: created pod pod-service-account-mountsa-nomountspec
Oct  6 20:32:55.343: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Oct  6 20:32:55.356: INFO: created pod pod-service-account-nomountsa-nomountspec
Oct  6 20:32:55.356: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:32:55.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3918" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":86,"skipped":1383,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:32:55.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Oct  6 20:32:55.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8915'
Oct  6 20:32:58.171: INFO: stderr: ""
Oct  6 20:32:58.171: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Oct  6 20:32:58.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8915'
Oct  6 20:32:59.696: INFO: stderr: ""
Oct  6 20:32:59.696: INFO: stdout: "update-demo-nautilus-tkdcq update-demo-nautilus-xzg55 "
Oct  6 20:32:59.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tkdcq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8915'
Oct  6 20:33:01.029: INFO: stderr: ""
Oct  6 20:33:01.029: INFO: stdout: ""
Oct  6 20:33:01.030: INFO: update-demo-nautilus-tkdcq is created but not running
Oct  6 20:33:06.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8915'
Oct  6 20:33:07.484: INFO: stderr: ""
Oct  6 20:33:07.484: INFO: stdout: "update-demo-nautilus-tkdcq update-demo-nautilus-xzg55 "
Oct  6 20:33:07.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tkdcq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8915'
Oct  6 20:33:08.776: INFO: stderr: ""
Oct  6 20:33:08.776: INFO: stdout: "true"
Oct  6 20:33:08.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tkdcq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8915'
Oct  6 20:33:10.010: INFO: stderr: ""
Oct  6 20:33:10.011: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Oct  6 20:33:10.011: INFO: validating pod update-demo-nautilus-tkdcq
Oct  6 20:33:10.033: INFO: got data: {
  "image": "nautilus.jpg"
}

Oct  6 20:33:10.034: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct  6 20:33:10.035: INFO: update-demo-nautilus-tkdcq is verified up and running
Oct  6 20:33:10.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xzg55 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8915'
Oct  6 20:33:11.320: INFO: stderr: ""
Oct  6 20:33:11.321: INFO: stdout: "true"
Oct  6 20:33:11.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xzg55 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8915'
Oct  6 20:33:12.574: INFO: stderr: ""
Oct  6 20:33:12.574: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Oct  6 20:33:12.574: INFO: validating pod update-demo-nautilus-xzg55
Oct  6 20:33:12.584: INFO: got data: {
  "image": "nautilus.jpg"
}

Oct  6 20:33:12.585: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct  6 20:33:12.585: INFO: update-demo-nautilus-xzg55 is verified up and running
STEP: using delete to clean up resources
Oct  6 20:33:12.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8915'
Oct  6 20:33:13.820: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct  6 20:33:13.821: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Oct  6 20:33:13.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8915'
Oct  6 20:33:15.050: INFO: stderr: "No resources found in kubectl-8915 namespace.\n"
Oct  6 20:33:15.050: INFO: stdout: ""
Oct  6 20:33:15.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8915 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Oct  6 20:33:16.355: INFO: stderr: ""
Oct  6 20:33:16.355: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:33:16.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8915" for this suite.

• [SLOW TEST:20.868 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should create and stop a replication controller  [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":278,"completed":87,"skipped":1398,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:33:16.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 20:33:16.465: INFO: Creating deployment "webserver-deployment"
Oct  6 20:33:16.471: INFO: Waiting for observed generation 1
Oct  6 20:33:18.565: INFO: Waiting for all required pods to come up
Oct  6 20:33:18.627: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Oct  6 20:33:28.646: INFO: Waiting for deployment "webserver-deployment" to complete
Oct  6 20:33:28.656: INFO: Updating deployment "webserver-deployment" with a non-existent image
Oct  6 20:33:28.667: INFO: Updating deployment webserver-deployment
Oct  6 20:33:28.667: INFO: Waiting for observed generation 2
Oct  6 20:33:30.810: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Oct  6 20:33:30.816: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Oct  6 20:33:30.822: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Oct  6 20:33:30.838: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Oct  6 20:33:30.838: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Oct  6 20:33:30.843: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Oct  6 20:33:30.848: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Oct  6 20:33:30.848: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Oct  6 20:33:30.856: INFO: Updating deployment webserver-deployment
Oct  6 20:33:30.856: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Oct  6 20:33:31.212: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Oct  6 20:33:33.815: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
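The arithmetic being verified here: the deployment is scaled from 10 to 30 with maxSurge=3 (see the Deployment dump below), so the allowed total is 33 pods, split across the two ReplicaSets in proportion to their current sizes (8 and 5 of 13), landing at 20 and 13. The sketch below models that proportional split with nearest-integer rounding plus drift correction; it is an approximation of the deployment controller's behaviour, not the exact upstream algorithm.

```python
import math

def proportional_scale(sizes, target, max_surge):
    """Split the allowed total (target + max_surge) across ReplicaSets
    in proportion to their current sizes, putting any rounding drift on
    the largest ReplicaSet. A sketch, not the upstream implementation."""
    total = sum(sizes)
    allowed = target + max_surge
    new = [math.floor(allowed * s / total + 0.5) for s in sizes]  # round half up
    drift = allowed - sum(new)
    new[new.index(max(new))] += drift
    return new

# The log's numbers: old RS has 8, new RS has 5; scaling 10 -> 30 with
# maxSurge=3 gives an allowed total of 33, verified as 20 and 13.
assert proportional_scale([8, 5], target=30, max_surge=3) == [20, 13]
```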
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Oct  6 20:33:35.314: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-1406 /apis/apps/v1/namespaces/deployment-1406/deployments/webserver-deployment 6664f13e-f3f0-40d8-b743-f872b9c60f2e 3604278 3 2020-10-06 20:33:16 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4004231da8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-10-06 20:33:31 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-10-06 20:33:31 +0000 UTC,LastTransitionTime:2020-10-06 20:33:16 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Oct  6 20:33:36.185: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-1406 /apis/apps/v1/namespaces/deployment-1406/replicasets/webserver-deployment-c7997dcc8 c2c11033-0153-40e7-bbb1-8cc4e87067be 3604273 3 2020-10-06 20:33:28 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 6664f13e-f3f0-40d8-b743-f872b9c60f2e 0x4002ef06b7 0x4002ef06b8}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4002ef0728  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Oct  6 20:33:36.186: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Oct  6 20:33:36.186: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-1406 /apis/apps/v1/namespaces/deployment-1406/replicasets/webserver-deployment-595b5b9587 f1c681ff-9e98-4b0e-9746-5b6b1c457169 3604251 3 2020-10-06 20:33:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 6664f13e-f3f0-40d8-b743-f872b9c60f2e 0x4002ef05f7 0x4002ef05f8}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4002ef0658  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Oct  6 20:33:36.377: INFO: Pod "webserver-deployment-595b5b9587-6ms4b" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-6ms4b webserver-deployment-595b5b9587- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-595b5b9587-6ms4b f68074c5-773d-47b6-8e0b-f4ebc2961d9a 3604098 0 2020-10-06 20:33:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f1c681ff-9e98-4b0e-9746-5b6b1c457169 0x40033f21c7 0x40033f21c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.39,StartTime:2020-10-06 20:33:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-06 20:33:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://67f8032c677973530c1f8f61ea64759ab58e0359f5205c02862cfb414696f097,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.39,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.378: INFO: Pod "webserver-deployment-595b5b9587-6nrdj" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-6nrdj webserver-deployment-595b5b9587- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-595b5b9587-6nrdj 80e9f797-635d-476e-848e-166b98833490 3604305 0 2020-10-06 20:33:31 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f1c681ff-9e98-4b0e-9746-5b6b1c457169 0x40033f2347 0x40033f2348}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-10-06 20:33:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.379: INFO: Pod "webserver-deployment-595b5b9587-6qzm8" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-6qzm8 webserver-deployment-595b5b9587- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-595b5b9587-6qzm8 1ef812a1-42e0-4e76-b420-b20bf4b253e9 3604126 0 2020-10-06 20:33:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f1c681ff-9e98-4b0e-9746-5b6b1c457169 0x40033f24a7 0x40033f24a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.40,StartTime:2020-10-06 20:33:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-06 20:33:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bfb99a5bb9af6ea6d6b783833c32bd660190d6f6f6999c970275820645c5fbf5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.40,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.380: INFO: Pod "webserver-deployment-595b5b9587-9mqbh" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9mqbh webserver-deployment-595b5b9587- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-595b5b9587-9mqbh b9c46e95-61d6-4a65-b9ad-71a3a6ceb5d3 3604270 0 2020-10-06 20:33:31 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f1c681ff-9e98-4b0e-9746-5b6b1c457169 0x40033f2627 0x40033f2628}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-10-06 20:33:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.381: INFO: Pod "webserver-deployment-595b5b9587-bnbqq" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-bnbqq webserver-deployment-595b5b9587- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-595b5b9587-bnbqq 6b956c84-3075-4cd1-9bf8-1ca1b93a10e0 3604080 0 2020-10-06 20:33:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f1c681ff-9e98-4b0e-9746-5b6b1c457169 0x40033f27a7 0x40033f27a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.38,StartTime:2020-10-06 20:33:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-06 20:33:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://492fe49156d3e03cb2ac7a884fdcc5b804b6272b13c62c77a60d75bdee6093b1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.38,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.382: INFO: Pod "webserver-deployment-595b5b9587-cgsj8" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-cgsj8 webserver-deployment-595b5b9587- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-595b5b9587-cgsj8 2b2a048b-d60a-4f49-b34e-37bd0c6400b5 3604283 0 2020-10-06 20:33:31 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f1c681ff-9e98-4b0e-9746-5b6b1c457169 0x40033f2927 0x40033f2928}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-10-06 20:33:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.383: INFO: Pod "webserver-deployment-595b5b9587-chr95" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-chr95 webserver-deployment-595b5b9587- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-595b5b9587-chr95 30c1aa81-1185-4d45-b9d1-bac3208bdb3e 3604294 0 2020-10-06 20:33:31 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f1c681ff-9e98-4b0e-9746-5b6b1c457169 0x40033f2a87 0x40033f2a88}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-10-06 20:33:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.384: INFO: Pod "webserver-deployment-595b5b9587-kq8fb" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-kq8fb webserver-deployment-595b5b9587- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-595b5b9587-kq8fb d52a8c65-2ffd-4b2d-a5c9-544e4184682b 3604120 0 2020-10-06 20:33:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f1c681ff-9e98-4b0e-9746-5b6b1c457169 0x40033f2be7 0x40033f2be8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.193,StartTime:2020-10-06 20:33:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-06 20:33:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cc1b0551509b4294c8a80f6550045c8974b7a5c777c41520130e19163b7ff31e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.193,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.385: INFO: Pod "webserver-deployment-595b5b9587-l6crz" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-l6crz webserver-deployment-595b5b9587- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-595b5b9587-l6crz f823564b-ffa1-4daf-bebe-2f873edb1b29 3604308 0 2020-10-06 20:33:31 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f1c681ff-9e98-4b0e-9746-5b6b1c457169 0x40033f2d67 0x40033f2d68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-10-06 20:33:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.386: INFO: Pod "webserver-deployment-595b5b9587-qrlt6" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qrlt6 webserver-deployment-595b5b9587- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-595b5b9587-qrlt6 bf019210-21dc-49d1-93ea-a317e452873e 3604290 0 2020-10-06 20:33:31 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f1c681ff-9e98-4b0e-9746-5b6b1c457169 0x40033f2ec7 0x40033f2ec8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-10-06 20:33:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.387: INFO: Pod "webserver-deployment-595b5b9587-rjkhc" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rjkhc webserver-deployment-595b5b9587- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-595b5b9587-rjkhc 6b13c55f-fe00-4aad-8ca1-6673cda85517 3604275 0 2020-10-06 20:33:31 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f1c681ff-9e98-4b0e-9746-5b6b1c457169 0x40033f3027 0x40033f3028}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-10-06 20:33:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.389: INFO: Pod "webserver-deployment-595b5b9587-sdtwr" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-sdtwr webserver-deployment-595b5b9587- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-595b5b9587-sdtwr bb5af594-8244-4569-b5fd-ed4a486358a1 3604274 0 2020-10-06 20:33:31 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f1c681ff-9e98-4b0e-9746-5b6b1c457169 0x40033f3187 0x40033f3188}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-10-06 20:33:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.389: INFO: Pod "webserver-deployment-595b5b9587-sgjhv" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-sgjhv webserver-deployment-595b5b9587- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-595b5b9587-sgjhv 201cf1fb-3d06-42a2-b588-052e7aba80b1 3604267 0 2020-10-06 20:33:31 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f1c681ff-9e98-4b0e-9746-5b6b1c457169 0x40033f32e7 0x40033f32e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-10-06 20:33:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.390: INFO: Pod "webserver-deployment-595b5b9587-tccqt" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-tccqt webserver-deployment-595b5b9587- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-595b5b9587-tccqt c9503551-0a88-4e8f-b515-303b4c930682 3604055 0 2020-10-06 20:33:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f1c681ff-9e98-4b0e-9746-5b6b1c457169 0x40033f3447 0x40033f3448}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.37,StartTime:2020-10-06 20:33:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-06 20:33:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1caea2e5b5f741ad5a1b5aa5455549fb81327ed15ae8eb8b09b9a8da9e784218,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.37,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.391: INFO: Pod "webserver-deployment-595b5b9587-tkbzs" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-tkbzs webserver-deployment-595b5b9587- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-595b5b9587-tkbzs 3b963719-d182-4034-9f0a-e629597c5554 3604295 0 2020-10-06 20:33:31 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f1c681ff-9e98-4b0e-9746-5b6b1c457169 0x40033f35c7 0x40033f35c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-10-06 20:33:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.392: INFO: Pod "webserver-deployment-595b5b9587-vvksk" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-vvksk webserver-deployment-595b5b9587- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-595b5b9587-vvksk fbbc08a5-ff60-44c8-ab56-121249993c92 3604103 0 2020-10-06 20:33:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f1c681ff-9e98-4b0e-9746-5b6b1c457169 0x40033f3727 0x40033f3728}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.190,StartTime:2020-10-06 20:33:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-06 20:33:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://95dd587a9343ea4704e2d3d8868205b8ad1653b7b2a40d70aaa28bbed3b986d8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.190,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.394: INFO: Pod "webserver-deployment-595b5b9587-w9v75" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-w9v75 webserver-deployment-595b5b9587- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-595b5b9587-w9v75 215fe3b5-41f6-4507-aaa2-0f7d4db72ab8 3604297 0 2020-10-06 20:33:31 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f1c681ff-9e98-4b0e-9746-5b6b1c457169 0x40033f38a7 0x40033f38a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-10-06 20:33:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.395: INFO: Pod "webserver-deployment-595b5b9587-x2gdc" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-x2gdc webserver-deployment-595b5b9587- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-595b5b9587-x2gdc 450eea0a-e4b1-4fea-9463-3bad6f6161b5 3604095 0 2020-10-06 20:33:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f1c681ff-9e98-4b0e-9746-5b6b1c457169 0x40033f3a07 0x40033f3a08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDe
adlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.191,StartTime:2020-10-06 20:33:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-06 20:33:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2d8689518c744de18eb749f3772f7380b6f7363305a9f1c0a91a5e7ff7d42962,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.191,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.395: INFO: Pod "webserver-deployment-595b5b9587-x6fb2" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-x6fb2 webserver-deployment-595b5b9587- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-595b5b9587-x6fb2 da92b2a9-3c90-44e6-ae22-d2f43ab57986 3604123 0 2020-10-06 20:33:16 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f1c681ff-9e98-4b0e-9746-5b6b1c457169 0x40033f3b87 0x40033f3b88}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDe
adlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.192,StartTime:2020-10-06 20:33:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-06 20:33:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ef8c9bfc79ecbbe40fcdc5c2cac30e3a6ce3ac8001cc5b2f541d553e43c8a055,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.192,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.397: INFO: Pod "webserver-deployment-595b5b9587-zzg8x" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-zzg8x webserver-deployment-595b5b9587- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-595b5b9587-zzg8x d16963a9-82cb-4b43-a999-3f86ce97eba4 3604254 0 2020-10-06 20:33:30 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f1c681ff-9e98-4b0e-9746-5b6b1c457169 0x40033f3d07 0x40033f3d08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDe
adlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-10-06 20:33:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.399: INFO: Pod "webserver-deployment-c7997dcc8-4fd4r" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4fd4r webserver-deployment-c7997dcc8- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-c7997dcc8-4fd4r 949082f3-c010-4606-a2c1-8ae052ce8be0 3604315 0 2020-10-06 20:33:31 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c2c11033-0153-40e7-bbb1-8cc4e87067be 0x40033f3e67 0x40033f3e68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:C
lusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-10-06 20:33:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.401: INFO: Pod "webserver-deployment-c7997dcc8-5khdx" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5khdx webserver-deployment-c7997dcc8- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-c7997dcc8-5khdx f00a8291-23c7-4e9c-9ba1-b87f3fbb2aac 3604313 0 2020-10-06 20:33:31 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c2c11033-0153-40e7-bbb1-8cc4e87067be 0x40033f3fe7 0x40033f3fe8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:C
lusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-10-06 20:33:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.403: INFO: Pod "webserver-deployment-c7997dcc8-5smn6" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5smn6 webserver-deployment-c7997dcc8- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-c7997dcc8-5smn6 4b3485e5-d231-4cd6-8895-4f4c657c304c 3604341 0 2020-10-06 20:33:28 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c2c11033-0153-40e7-bbb1-8cc4e87067be 0x4000eb0317 0x4000eb0318}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:C
lusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.43,StartTime:2020-10-06 20:33:28 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.405: INFO: Pod "webserver-deployment-c7997dcc8-6qfgr" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6qfgr webserver-deployment-c7997dcc8- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-c7997dcc8-6qfgr 09ab4e1f-29fc-45a4-970e-0a090a264da4 3604284 0 2020-10-06 20:33:31 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c2c11033-0153-40e7-bbb1-8cc4e87067be 0x4000eb05e7 0x4000eb05e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:C
lusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-10-06 20:33:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.407: INFO: Pod "webserver-deployment-c7997dcc8-7w7vb" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7w7vb webserver-deployment-c7997dcc8- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-c7997dcc8-7w7vb 2eb6e6b4-ea84-4d97-8e8b-8c9809d184fe 3604186 0 2020-10-06 20:33:28 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c2c11033-0153-40e7-bbb1-8cc4e87067be 0x4000eb07f7 0x4000eb07f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:C
lusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-10-06 20:33:29 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.408: INFO: Pod "webserver-deployment-c7997dcc8-d7t8s" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-d7t8s webserver-deployment-c7997dcc8- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-c7997dcc8-d7t8s 77f8b934-fbdb-48d0-b107-d933da2a5dba 3604291 0 2020-10-06 20:33:31 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c2c11033-0153-40e7-bbb1-8cc4e87067be 0x4000eb0987 0x4000eb0988}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:C
lusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-10-06 20:33:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.410: INFO: Pod "webserver-deployment-c7997dcc8-hqbtf" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hqbtf webserver-deployment-c7997dcc8- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-c7997dcc8-hqbtf f7441ef7-33b5-4c0f-bd68-1556a0621ca5 3604178 0 2020-10-06 20:33:28 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c2c11033-0153-40e7-bbb1-8cc4e87067be 0x4000eb0b07 0x4000eb0b08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:C
lusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-10-06 20:33:28 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.411: INFO: Pod "webserver-deployment-c7997dcc8-l6qx5" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-l6qx5 webserver-deployment-c7997dcc8- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-c7997dcc8-l6qx5 f012178d-86a1-4748-bf28-92ddb61ddb19 3604318 0 2020-10-06 20:33:31 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c2c11033-0153-40e7-bbb1-8cc4e87067be 0x4000eb0c97 0x4000eb0c98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:C
lusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-10-06 20:33:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.412: INFO: Pod "webserver-deployment-c7997dcc8-l88kt" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-l88kt webserver-deployment-c7997dcc8- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-c7997dcc8-l88kt cbf17a3a-4384-4651-bf97-caef0fa2846d 3604319 0 2020-10-06 20:33:31 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c2c11033-0153-40e7-bbb1-8cc4e87067be 0x4000eb0e47 0x4000eb0e48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:C
lusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-10-06 20:33:32 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.413: INFO: Pod "webserver-deployment-c7997dcc8-m246r" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-m246r webserver-deployment-c7997dcc8- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-c7997dcc8-m246r 89dd9775-f07c-46e9-ad0e-78d86e2ae4f2 3604298 0 2020-10-06 20:33:31 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c2c11033-0153-40e7-bbb1-8cc4e87067be 0x4000eb0fd7 0x4000eb0fd8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:C
lusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-10-06 20:33:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.414: INFO: Pod "webserver-deployment-c7997dcc8-nj2g7" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nj2g7 webserver-deployment-c7997dcc8- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-c7997dcc8-nj2g7 e5059385-e791-4ead-a27a-ade57f613f2a 3604269 0 2020-10-06 20:33:31 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c2c11033-0153-40e7-bbb1-8cc4e87067be 0x4000eb1157 0x4000eb1158}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:C
lusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-10-06 20:33:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.415: INFO: Pod "webserver-deployment-c7997dcc8-pgffh" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pgffh webserver-deployment-c7997dcc8- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-c7997dcc8-pgffh 10483d3e-529b-4c62-999b-de91bae83541 3604325 0 2020-10-06 20:33:28 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c2c11033-0153-40e7-bbb1-8cc4e87067be 0x4000eb12d7 0x4000eb12d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:C
lusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.42,StartTime:2020-10-06 20:33:28 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.42,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  6 20:33:36.416: INFO: Pod "webserver-deployment-c7997dcc8-qrzp8" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qrzp8 webserver-deployment-c7997dcc8- deployment-1406 /api/v1/namespaces/deployment-1406/pods/webserver-deployment-c7997dcc8-qrzp8 ca300ac8-a909-41f7-87e6-4687dcb378b7 3604328 0 2020-10-06 20:33:28 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c2c11033-0153-40e7-bbb1-8cc4e87067be 0x4000eb1497 0x4000eb1498}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsx7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsx7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsx7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:C
lusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:33:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.196,StartTime:2020-10-06 20:33:28 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.196,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:33:36.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1406" for this suite.

• [SLOW TEST:20.057 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":88,"skipped":1419,"failed":0}
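Editor's note: the proportional-scaling behavior this test passes on can be modeled roughly as "distribute the scale-up delta across ReplicaSets in proportion to their current sizes." The sketch below is an illustrative model only, not the actual deployment controller code; the real controller's tie-breaking and surge handling differ.

```python
def scale_proportionally(sizes, new_total):
    """Illustrative model: split a scale-up across ReplicaSets in
    proportion to their current replica counts (floor division;
    leftover replicas go to the largest ReplicaSets first).
    Assumes new_total >= sum(sizes); the real controller also
    handles scale-down and maxSurge, which this sketch omits."""
    old_total = sum(sizes)
    delta = new_total - old_total
    shares = [delta * s // old_total for s in sizes]
    leftover = delta - sum(shares)
    # Hand out the remainder starting with the largest ReplicaSet.
    order = sorted(range(len(sizes)), key=lambda i: -sizes[i])
    for i in order[:leftover]:
        shares[i] += 1
    return [s + extra for s, extra in zip(sizes, shares)]
```

For example, scaling two ReplicaSets of 5 replicas each to a total of 30 gives each one 15, preserving the 50/50 ratio mid-rollout.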
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:33:36.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Oct  6 20:33:36.821: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-2768 /api/v1/namespaces/watch-2768/configmaps/e2e-watch-test-watch-closed 91d6ea24-422d-4ba7-80bc-868220598911 3604348 0 2020-10-06 20:33:36 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Oct  6 20:33:36.822: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-2768 /api/v1/namespaces/watch-2768/configmaps/e2e-watch-test-watch-closed 91d6ea24-422d-4ba7-80bc-868220598911 3604349 0 2020-10-06 20:33:36 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Oct  6 20:33:38.410: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-2768 /api/v1/namespaces/watch-2768/configmaps/e2e-watch-test-watch-closed 91d6ea24-422d-4ba7-80bc-868220598911 3604351 0 2020-10-06 20:33:36 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Oct  6 20:33:38.411: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-2768 /api/v1/namespaces/watch-2768/configmaps/e2e-watch-test-watch-closed 91d6ea24-422d-4ba7-80bc-868220598911 3604355 0 2020-10-06 20:33:36 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:33:38.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2768" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":89,"skipped":1432,"failed":0}
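Editor's note: the Watchers test above verifies that a watch restarted from the last observed resourceVersion replays exactly the events missed while it was closed (here, the second MODIFIED and the DELETED events at resourceVersions 3604351 and 3604355). The following is a minimal in-memory sketch of those semantics, not the Kubernetes client or API-server implementation.

```python
class EventLog:
    """Toy model of the API server's versioned event history,
    used only to illustrate watch-resume semantics."""

    def __init__(self):
        self.events = []  # list of (resource_version, type, obj)
        self.rv = 0

    def record(self, ev_type, obj):
        self.rv += 1
        self.events.append((self.rv, ev_type, obj))

    def watch(self, since_rv=0):
        # Yield every event strictly newer than since_rv, like
        # starting a watch with ?resourceVersion=N.
        for rv, ev_type, obj in self.events:
            if rv > since_rv:
                yield rv, ev_type, obj


log = EventLog()
log.record("ADDED", "configmap mutation=0")
log.record("MODIFIED", "configmap mutation=1")

# First watch observes both events, then is closed.
first = list(log.watch())
last_seen_rv = first[-1][0]

# Changes made while no watch is open.
log.record("MODIFIED", "configmap mutation=2")
log.record("DELETED", "configmap mutation=2")

# A new watch started from the last observed resourceVersion
# replays exactly the missed events, as the test expects.
resumed = [ev_type for _, ev_type, _ in log.watch(since_rv=last_seen_rv)]
```

Note that real resourceVersions are opaque strings with a bounded retention window; this model treats them as monotonically increasing integers purely for illustration.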
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:33:38.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-7582/configmap-test-ae59eac8-1ffb-435b-9308-154162711d89
STEP: Creating a pod to test consume configMaps
Oct  6 20:33:39.103: INFO: Waiting up to 5m0s for pod "pod-configmaps-d9130157-299c-4989-aa43-ae7ab162f0ac" in namespace "configmap-7582" to be "success or failure"
Oct  6 20:33:39.440: INFO: Pod "pod-configmaps-d9130157-299c-4989-aa43-ae7ab162f0ac": Phase="Pending", Reason="", readiness=false. Elapsed: 336.448145ms
Oct  6 20:33:41.781: INFO: Pod "pod-configmaps-d9130157-299c-4989-aa43-ae7ab162f0ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.678036551s
Oct  6 20:33:44.370: INFO: Pod "pod-configmaps-d9130157-299c-4989-aa43-ae7ab162f0ac": Phase="Pending", Reason="", readiness=false. Elapsed: 5.267029559s
Oct  6 20:33:46.602: INFO: Pod "pod-configmaps-d9130157-299c-4989-aa43-ae7ab162f0ac": Phase="Pending", Reason="", readiness=false. Elapsed: 7.499075843s
Oct  6 20:33:48.615: INFO: Pod "pod-configmaps-d9130157-299c-4989-aa43-ae7ab162f0ac": Phase="Pending", Reason="", readiness=false. Elapsed: 9.511999238s
Oct  6 20:33:50.629: INFO: Pod "pod-configmaps-d9130157-299c-4989-aa43-ae7ab162f0ac": Phase="Pending", Reason="", readiness=false. Elapsed: 11.526062135s
Oct  6 20:33:52.637: INFO: Pod "pod-configmaps-d9130157-299c-4989-aa43-ae7ab162f0ac": Phase="Running", Reason="", readiness=true. Elapsed: 13.533718942s
Oct  6 20:33:54.643: INFO: Pod "pod-configmaps-d9130157-299c-4989-aa43-ae7ab162f0ac": Phase="Running", Reason="", readiness=true. Elapsed: 15.539984383s
Oct  6 20:33:56.662: INFO: Pod "pod-configmaps-d9130157-299c-4989-aa43-ae7ab162f0ac": Phase="Running", Reason="", readiness=true. Elapsed: 17.559159728s
Oct  6 20:33:58.668: INFO: Pod "pod-configmaps-d9130157-299c-4989-aa43-ae7ab162f0ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.5649709s
STEP: Saw pod success
Oct  6 20:33:58.668: INFO: Pod "pod-configmaps-d9130157-299c-4989-aa43-ae7ab162f0ac" satisfied condition "success or failure"
Oct  6 20:33:58.673: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-d9130157-299c-4989-aa43-ae7ab162f0ac container env-test: 
STEP: delete the pod
Oct  6 20:33:58.758: INFO: Waiting for pod pod-configmaps-d9130157-299c-4989-aa43-ae7ab162f0ac to disappear
Oct  6 20:33:58.805: INFO: Pod pod-configmaps-d9130157-299c-4989-aa43-ae7ab162f0ac no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:33:58.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7582" for this suite.

• [SLOW TEST:20.217 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1490,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:33:58.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Oct  6 20:33:59.014: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66a4ca5c-bed3-459f-b3f7-2e0426653da7" in namespace "projected-9263" to be "success or failure"
Oct  6 20:33:59.025: INFO: Pod "downwardapi-volume-66a4ca5c-bed3-459f-b3f7-2e0426653da7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.652007ms
Oct  6 20:34:01.044: INFO: Pod "downwardapi-volume-66a4ca5c-bed3-459f-b3f7-2e0426653da7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029727867s
Oct  6 20:34:03.050: INFO: Pod "downwardapi-volume-66a4ca5c-bed3-459f-b3f7-2e0426653da7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036469402s
STEP: Saw pod success
Oct  6 20:34:03.051: INFO: Pod "downwardapi-volume-66a4ca5c-bed3-459f-b3f7-2e0426653da7" satisfied condition "success or failure"
Oct  6 20:34:03.075: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-66a4ca5c-bed3-459f-b3f7-2e0426653da7 container client-container: 
STEP: delete the pod
Oct  6 20:34:03.096: INFO: Waiting for pod downwardapi-volume-66a4ca5c-bed3-459f-b3f7-2e0426653da7 to disappear
Oct  6 20:34:03.116: INFO: Pod downwardapi-volume-66a4ca5c-bed3-459f-b3f7-2e0426653da7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:34:03.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9263" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1492,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:34:03.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-d19b7848-7f75-423a-a008-f3e7aa813ffe
STEP: Creating a pod to test consume secrets
Oct  6 20:34:03.361: INFO: Waiting up to 5m0s for pod "pod-secrets-b75bdee8-da55-4170-923c-82aa19542f00" in namespace "secrets-1550" to be "success or failure"
Oct  6 20:34:03.371: INFO: Pod "pod-secrets-b75bdee8-da55-4170-923c-82aa19542f00": Phase="Pending", Reason="", readiness=false. Elapsed: 9.183991ms
Oct  6 20:34:05.377: INFO: Pod "pod-secrets-b75bdee8-da55-4170-923c-82aa19542f00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015435787s
Oct  6 20:34:07.384: INFO: Pod "pod-secrets-b75bdee8-da55-4170-923c-82aa19542f00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022443725s
STEP: Saw pod success
Oct  6 20:34:07.385: INFO: Pod "pod-secrets-b75bdee8-da55-4170-923c-82aa19542f00" satisfied condition "success or failure"
Oct  6 20:34:07.390: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-b75bdee8-da55-4170-923c-82aa19542f00 container secret-volume-test: 
STEP: delete the pod
Oct  6 20:34:07.465: INFO: Waiting for pod pod-secrets-b75bdee8-da55-4170-923c-82aa19542f00 to disappear
Oct  6 20:34:07.479: INFO: Pod pod-secrets-b75bdee8-da55-4170-923c-82aa19542f00 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:34:07.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1550" for this suite.
STEP: Destroying namespace "secret-namespace-2011" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1509,"failed":0}
SS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:34:07.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Oct  6 20:34:07.597: INFO: Waiting up to 5m0s for pod "downward-api-64d2202a-03ac-4faf-b10d-e1a55a7af011" in namespace "downward-api-3626" to be "success or failure"
Oct  6 20:34:07.605: INFO: Pod "downward-api-64d2202a-03ac-4faf-b10d-e1a55a7af011": Phase="Pending", Reason="", readiness=false. Elapsed: 7.619641ms
Oct  6 20:34:09.612: INFO: Pod "downward-api-64d2202a-03ac-4faf-b10d-e1a55a7af011": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014458144s
Oct  6 20:34:11.619: INFO: Pod "downward-api-64d2202a-03ac-4faf-b10d-e1a55a7af011": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021986941s
STEP: Saw pod success
Oct  6 20:34:11.619: INFO: Pod "downward-api-64d2202a-03ac-4faf-b10d-e1a55a7af011" satisfied condition "success or failure"
Oct  6 20:34:11.624: INFO: Trying to get logs from node jerma-worker pod downward-api-64d2202a-03ac-4faf-b10d-e1a55a7af011 container dapi-container: 
STEP: delete the pod
Oct  6 20:34:11.651: INFO: Waiting for pod downward-api-64d2202a-03ac-4faf-b10d-e1a55a7af011 to disappear
Oct  6 20:34:11.655: INFO: Pod downward-api-64d2202a-03ac-4faf-b10d-e1a55a7af011 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:34:11.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3626" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1511,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:34:11.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Oct  6 20:34:15.870: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:34:15.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4736" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1520,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:34:15.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Oct  6 20:34:20.537: INFO: Successfully updated pod "pod-update-a6b2ee79-9c47-4aff-9c8d-4cfd325c76d6"
STEP: verifying the updated pod is in kubernetes
Oct  6 20:34:20.564: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:34:20.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4730" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1529,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:34:20.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:34:31.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5950" for this suite.

• [SLOW TEST:11.156 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":96,"skipped":1540,"failed":0}
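For readers following the log: the ResourceQuota spec above creates a quota, creates a ReplicationController, checks that `status.used` captures it, then deletes the controller and checks the usage is released. A minimal sketch of that accounting (a toy model, not the actual Kubernetes quota controller; the class and resource key are illustrative):

```python
# Toy model of the bookkeeping this conformance test exercises:
# a ResourceQuota's used count for "count/replicationcontrollers"
# rises when an object is created and falls when it is deleted.

class ResourceQuotaModel:
    def __init__(self, hard):
        self.hard = dict(hard)               # e.g. {"count/replicationcontrollers": 1}
        self.used = {k: 0 for k in hard}     # mirrors status.used

    def charge(self, resource, n=1):
        # Admission would reject the create if it exceeded the hard limit.
        if self.used[resource] + n > self.hard[resource]:
            raise RuntimeError(f"exceeded quota for {resource}")
        self.used[resource] += n

    def release(self, resource, n=1):
        self.used[resource] = max(0, self.used[resource] - n)

quota = ResourceQuotaModel({"count/replicationcontrollers": 1})
quota.charge("count/replicationcontrollers")    # "Creating a ReplicationController"
assert quota.used["count/replicationcontrollers"] == 1
quota.release("count/replicationcontrollers")   # "Deleting a ReplicationController"
assert quota.used["count/replicationcontrollers"] == 0
```

The real controller recomputes `status.used` asynchronously, which is why the test waits for the quota status at each step rather than asserting immediately.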
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:34:31.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-aacd396c-e186-4596-a660-569086e327bb
STEP: Creating a pod to test consume configMaps
Oct  6 20:34:31.869: INFO: Waiting up to 5m0s for pod "pod-configmaps-a826e284-b7c5-4af8-9082-718b0ed6b364" in namespace "configmap-1699" to be "success or failure"
Oct  6 20:34:31.899: INFO: Pod "pod-configmaps-a826e284-b7c5-4af8-9082-718b0ed6b364": Phase="Pending", Reason="", readiness=false. Elapsed: 30.319835ms
Oct  6 20:34:33.906: INFO: Pod "pod-configmaps-a826e284-b7c5-4af8-9082-718b0ed6b364": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037414007s
Oct  6 20:34:35.913: INFO: Pod "pod-configmaps-a826e284-b7c5-4af8-9082-718b0ed6b364": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044090272s
Oct  6 20:34:37.920: INFO: Pod "pod-configmaps-a826e284-b7c5-4af8-9082-718b0ed6b364": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051094832s
STEP: Saw pod success
Oct  6 20:34:37.921: INFO: Pod "pod-configmaps-a826e284-b7c5-4af8-9082-718b0ed6b364" satisfied condition "success or failure"
Oct  6 20:34:37.926: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-a826e284-b7c5-4af8-9082-718b0ed6b364 container configmap-volume-test: 
STEP: delete the pod
Oct  6 20:34:37.956: INFO: Waiting for pod pod-configmaps-a826e284-b7c5-4af8-9082-718b0ed6b364 to disappear
Oct  6 20:34:37.973: INFO: Pod pod-configmaps-a826e284-b7c5-4af8-9082-718b0ed6b364 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:34:37.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1699" for this suite.

• [SLOW TEST:6.248 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1562,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:34:37.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct  6 20:34:41.125: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct  6 20:34:43.310: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613281, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613281, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613281, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613281, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct  6 20:34:46.399: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 20:34:46.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-713-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:34:47.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4404" for this suite.
STEP: Destroying namespace "webhook-4404-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.231 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":98,"skipped":1563,"failed":0}
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:34:47.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-2f4833e6-734e-4cbb-99ad-c27be47fdc75
STEP: Creating a pod to test consume secrets
Oct  6 20:34:47.361: INFO: Waiting up to 5m0s for pod "pod-secrets-2cc74ef7-e155-48e7-b281-d888c9d7af86" in namespace "secrets-2835" to be "success or failure"
Oct  6 20:34:47.373: INFO: Pod "pod-secrets-2cc74ef7-e155-48e7-b281-d888c9d7af86": Phase="Pending", Reason="", readiness=false. Elapsed: 11.899211ms
Oct  6 20:34:49.379: INFO: Pod "pod-secrets-2cc74ef7-e155-48e7-b281-d888c9d7af86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017986771s
Oct  6 20:34:51.385: INFO: Pod "pod-secrets-2cc74ef7-e155-48e7-b281-d888c9d7af86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023696603s
STEP: Saw pod success
Oct  6 20:34:51.385: INFO: Pod "pod-secrets-2cc74ef7-e155-48e7-b281-d888c9d7af86" satisfied condition "success or failure"
Oct  6 20:34:51.389: INFO: Trying to get logs from node jerma-worker pod pod-secrets-2cc74ef7-e155-48e7-b281-d888c9d7af86 container secret-volume-test: 
STEP: delete the pod
Oct  6 20:34:51.484: INFO: Waiting for pod pod-secrets-2cc74ef7-e155-48e7-b281-d888c9d7af86 to disappear
Oct  6 20:34:51.489: INFO: Pod pod-secrets-2cc74ef7-e155-48e7-b281-d888c9d7af86 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:34:51.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2835" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1563,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:34:51.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl replace
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1796
[It] should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Oct  6 20:34:51.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-4979'
Oct  6 20:34:52.969: INFO: stderr: ""
Oct  6 20:34:52.969: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Oct  6 20:34:58.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-4979 -o json'
Oct  6 20:34:59.260: INFO: stderr: ""
Oct  6 20:34:59.261: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-10-06T20:34:52Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-4979\",\n        \"resourceVersion\": \"3605075\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-4979/pods/e2e-test-httpd-pod\",\n        \"uid\": \"53a7ac8f-09a8-4755-b160-c645c054ea99\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-fc79d\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": 
\"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-fc79d\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-fc79d\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-10-06T20:34:52Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-10-06T20:34:55Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-10-06T20:34:55Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-10-06T20:34:52Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://3215ce1e2707ccc890eac8903a1ef028afed785d13f7ed9c6626ff208247888b\",\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": 
{\n                        \"startedAt\": \"2020-10-06T20:34:55Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.9\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.212\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.2.212\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-10-06T20:34:52Z\"\n    }\n}\n"
STEP: replace the image in the pod
Oct  6 20:34:59.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4979'
Oct  6 20:35:00.809: INFO: stderr: ""
Oct  6 20:35:00.809: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1801
Oct  6 20:35:00.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4979'
Oct  6 20:35:04.768: INFO: stderr: ""
Oct  6 20:35:04.769: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:35:04.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4979" for this suite.

• [SLOW TEST:13.281 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1792
    should update a single-container pod's image  [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":100,"skipped":1583,"failed":0}
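The `kubectl replace` spec above fetches the pod as JSON (`kubectl get pod ... -o json`), swaps the single container's image from `httpd:2.4.38-alpine` to `busybox:1.29`, and pipes the edited manifest back through `kubectl replace -f -`. A local sketch of that edit step, using only the stdlib `json` module (the pod JSON here is a trimmed stand-in for the full object in the log):

```python
# Sketch of the image-swap the test performs before `kubectl replace -f -`:
# parse the pod manifest, change spec.containers[0].image, re-serialize.

import json

pod_json = """{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {"name": "e2e-test-httpd-pod", "namespace": "kubectl-4979"},
  "spec": {"containers": [{"name": "e2e-test-httpd-pod",
                           "image": "docker.io/library/httpd:2.4.38-alpine"}]}
}"""

pod = json.loads(pod_json)
pod["spec"]["containers"][0]["image"] = "docker.io/library/busybox:1.29"
edited = json.dumps(pod, indent=2)   # this text is what gets piped to kubectl replace -f -
assert json.loads(edited)["spec"]["containers"][0]["image"] == "docker.io/library/busybox:1.29"
```

`replace` then submits the whole edited object, which is why the test verifies afterwards that the live pod reports the busybox image.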
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:35:04.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Oct  6 20:35:04.840: INFO: Waiting up to 5m0s for pod "pod-b6b7b300-b916-429b-97ee-c215f451c07f" in namespace "emptydir-494" to be "success or failure"
Oct  6 20:35:04.889: INFO: Pod "pod-b6b7b300-b916-429b-97ee-c215f451c07f": Phase="Pending", Reason="", readiness=false. Elapsed: 48.015576ms
Oct  6 20:35:06.896: INFO: Pod "pod-b6b7b300-b916-429b-97ee-c215f451c07f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055380145s
Oct  6 20:35:08.903: INFO: Pod "pod-b6b7b300-b916-429b-97ee-c215f451c07f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062666816s
STEP: Saw pod success
Oct  6 20:35:08.903: INFO: Pod "pod-b6b7b300-b916-429b-97ee-c215f451c07f" satisfied condition "success or failure"
Oct  6 20:35:08.908: INFO: Trying to get logs from node jerma-worker pod pod-b6b7b300-b916-429b-97ee-c215f451c07f container test-container: 
STEP: delete the pod
Oct  6 20:35:08.962: INFO: Waiting for pod pod-b6b7b300-b916-429b-97ee-c215f451c07f to disappear
Oct  6 20:35:09.009: INFO: Pod pod-b6b7b300-b916-429b-97ee-c215f451c07f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:35:09.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-494" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1586,"failed":0}
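The repeated `Waiting up to 5m0s for pod ... Phase="Pending" ... Elapsed: ...` lines throughout this log come from the framework polling the pod's phase until it reaches a terminal state or times out. A simplified sketch of that loop (illustrative only, not the e2e framework's actual implementation; `get_phase` is a stand-in for the API call):

```python
# Simplified version of the poll loop behind the "Waiting up to 5m0s for pod"
# log lines: check the phase every poll interval, stop on Succeeded/Failed,
# give up after the timeout.

def wait_for_terminal_phase(get_phase, timeout_s=300.0, poll_s=2.0):
    elapsed = 0.0
    while elapsed <= timeout_s:
        phase = get_phase()
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        elapsed += poll_s   # the real framework tracks wall-clock time and sleeps here
    raise TimeoutError("pod did not reach a terminal phase within the timeout")

phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases))
assert result == "Succeeded"
```

This matches the cadence visible above: roughly one status line every two seconds until the pod reports `Succeeded`.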
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:35:09.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct  6 20:35:14.894: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct  6 20:35:16.911: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613314, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613314, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613314, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613314, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct  6 20:35:19.947: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:35:20.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7600" for this suite.
STEP: Destroying namespace "webhook-7600-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.150 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":102,"skipped":1621,"failed":0}
SSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:35:20.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 20:35:20.269: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-f60b141e-638c-47ef-94b3-a34a634431e5" in namespace "security-context-test-4253" to be "success or failure"
Oct  6 20:35:20.278: INFO: Pod "alpine-nnp-false-f60b141e-638c-47ef-94b3-a34a634431e5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.695696ms
Oct  6 20:35:22.297: INFO: Pod "alpine-nnp-false-f60b141e-638c-47ef-94b3-a34a634431e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027224547s
Oct  6 20:35:24.302: INFO: Pod "alpine-nnp-false-f60b141e-638c-47ef-94b3-a34a634431e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032954979s
Oct  6 20:35:24.303: INFO: Pod "alpine-nnp-false-f60b141e-638c-47ef-94b3-a34a634431e5" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:35:24.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4253" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1627,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:35:24.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-8c1fcfa0-34d6-406d-a3f0-ab88f94ee871
STEP: Creating a pod to test consume configMaps
Oct  6 20:35:24.437: INFO: Waiting up to 5m0s for pod "pod-configmaps-b08864e3-ef94-44ae-aa35-ecc094b0f616" in namespace "configmap-9435" to be "success or failure"
Oct  6 20:35:24.489: INFO: Pod "pod-configmaps-b08864e3-ef94-44ae-aa35-ecc094b0f616": Phase="Pending", Reason="", readiness=false. Elapsed: 51.993792ms
Oct  6 20:35:26.495: INFO: Pod "pod-configmaps-b08864e3-ef94-44ae-aa35-ecc094b0f616": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057922378s
Oct  6 20:35:28.548: INFO: Pod "pod-configmaps-b08864e3-ef94-44ae-aa35-ecc094b0f616": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111124758s
STEP: Saw pod success
Oct  6 20:35:28.549: INFO: Pod "pod-configmaps-b08864e3-ef94-44ae-aa35-ecc094b0f616" satisfied condition "success or failure"
Oct  6 20:35:28.680: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-b08864e3-ef94-44ae-aa35-ecc094b0f616 container configmap-volume-test: 
STEP: delete the pod
Oct  6 20:35:28.719: INFO: Waiting for pod pod-configmaps-b08864e3-ef94-44ae-aa35-ecc094b0f616 to disappear
Oct  6 20:35:28.727: INFO: Pod pod-configmaps-b08864e3-ef94-44ae-aa35-ecc094b0f616 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:35:28.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9435" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1653,"failed":0}
SSSSSSSSSSSSS
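The ConfigMap test above projects each data key of the configMap into a file inside the pod's volume mount, which the test container then reads back. A minimal local sketch of that projection (paths and key/value names are hypothetical; the real kubelet additionally swaps a timestamped data directory behind a symlink for atomic updates):

```shell
# Simulate a configMap volume: each data key becomes a file under the mount
# path whose content is that key's value.
mkdir -p /tmp/cm-volume-demo
printf 'value-1' > /tmp/cm-volume-demo/data-1   # key "data-1" -> file
cat /tmp/cm-volume-demo/data-1                  # the pod reads the value back
```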
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:35:28.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-d1e9e8d4-4d14-4094-a729-6b955acb0660
STEP: Creating a pod to test consume secrets
Oct  6 20:35:28.844: INFO: Waiting up to 5m0s for pod "pod-secrets-e5b722e8-ca7c-4ab2-8605-57bda297d62f" in namespace "secrets-6087" to be "success or failure"
Oct  6 20:35:28.874: INFO: Pod "pod-secrets-e5b722e8-ca7c-4ab2-8605-57bda297d62f": Phase="Pending", Reason="", readiness=false. Elapsed: 29.326165ms
Oct  6 20:35:30.881: INFO: Pod "pod-secrets-e5b722e8-ca7c-4ab2-8605-57bda297d62f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036643819s
Oct  6 20:35:32.980: INFO: Pod "pod-secrets-e5b722e8-ca7c-4ab2-8605-57bda297d62f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.135066358s
STEP: Saw pod success
Oct  6 20:35:32.980: INFO: Pod "pod-secrets-e5b722e8-ca7c-4ab2-8605-57bda297d62f" satisfied condition "success or failure"
Oct  6 20:35:32.993: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-e5b722e8-ca7c-4ab2-8605-57bda297d62f container secret-volume-test: 
STEP: delete the pod
Oct  6 20:35:33.033: INFO: Waiting for pod pod-secrets-e5b722e8-ca7c-4ab2-8605-57bda297d62f to disappear
Oct  6 20:35:33.052: INFO: Pod pod-secrets-e5b722e8-ca7c-4ab2-8605-57bda297d62f no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:35:33.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6087" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1666,"failed":0}
SSSSSSSSSSSSSSSSSSS
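The Secrets test above exercises `defaultMode`, which sets the file mode of every key projected into the secret volume. A filesystem-level sketch of the semantics (local paths and the mode value 0400 are illustrative, not taken from the test source):

```shell
# defaultMode: 0400 on a secret volume means each projected file ends up
# owner-read-only; stat -c '%a' (GNU coreutils) prints the octal mode.
mkdir -p /tmp/secret-volume-demo
printf 's3cr3t' > /tmp/secret-volume-demo/token
chmod 0400 /tmp/secret-volume-demo/token
stat -c '%a' /tmp/secret-volume-demo/token
```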
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:35:33.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-4652c5ae-00f0-436f-a3f9-c149ed4bba8c
STEP: Creating a pod to test consume configMaps
Oct  6 20:35:33.232: INFO: Waiting up to 5m0s for pod "pod-configmaps-1aae9970-8341-436a-81ea-98f297f14be2" in namespace "configmap-5387" to be "success or failure"
Oct  6 20:35:33.253: INFO: Pod "pod-configmaps-1aae9970-8341-436a-81ea-98f297f14be2": Phase="Pending", Reason="", readiness=false. Elapsed: 20.760373ms
Oct  6 20:35:35.440: INFO: Pod "pod-configmaps-1aae9970-8341-436a-81ea-98f297f14be2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208686072s
Oct  6 20:35:37.447: INFO: Pod "pod-configmaps-1aae9970-8341-436a-81ea-98f297f14be2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.215632101s
STEP: Saw pod success
Oct  6 20:35:37.448: INFO: Pod "pod-configmaps-1aae9970-8341-436a-81ea-98f297f14be2" satisfied condition "success or failure"
Oct  6 20:35:37.453: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-1aae9970-8341-436a-81ea-98f297f14be2 container configmap-volume-test: 
STEP: delete the pod
Oct  6 20:35:37.502: INFO: Waiting for pod pod-configmaps-1aae9970-8341-436a-81ea-98f297f14be2 to disappear
Oct  6 20:35:37.519: INFO: Pod pod-configmaps-1aae9970-8341-436a-81ea-98f297f14be2 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:35:37.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5387" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1685,"failed":0}
SSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:35:37.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 20:35:37.643: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-124326f2-7db3-4174-9f3b-55a9e71ea2df" in namespace "security-context-test-4559" to be "success or failure"
Oct  6 20:35:37.651: INFO: Pod "busybox-readonly-false-124326f2-7db3-4174-9f3b-55a9e71ea2df": Phase="Pending", Reason="", readiness=false. Elapsed: 8.670288ms
Oct  6 20:35:39.722: INFO: Pod "busybox-readonly-false-124326f2-7db3-4174-9f3b-55a9e71ea2df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079360744s
Oct  6 20:35:41.729: INFO: Pod "busybox-readonly-false-124326f2-7db3-4174-9f3b-55a9e71ea2df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08595605s
Oct  6 20:35:41.729: INFO: Pod "busybox-readonly-false-124326f2-7db3-4174-9f3b-55a9e71ea2df" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:35:41.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4559" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1688,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
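The Security Context test above verifies that a container with `readOnlyRootFilesystem: false` can write to its root filesystem. A hedged sketch of the kind of pod spec it effectively creates (`securityContext.readOnlyRootFilesystem` is the real pod-spec field; the pod name, image, and write command are illustrative, not copied from the test source):

```shell
# Generate an illustrative manifest exercising readOnlyRootFilesystem=false.
cat > /tmp/busybox-readonly-false.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "echo writable > /rootfs-write-test"]
    securityContext:
      readOnlyRootFilesystem: false
EOF
grep 'readOnlyRootFilesystem' /tmp/busybox-readonly-false.yaml
```

With `readOnlyRootFilesystem: true` the same write would fail with a read-only filesystem error, which is the inverse case the conformance suite covers elsewhere.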
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:35:41.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-6800
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct  6 20:35:41.859: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Oct  6 20:36:10.040: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.215 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6800 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct  6 20:36:10.040: INFO: >>> kubeConfig: /root/.kube/config
I1006 20:36:10.107658       7 log.go:172] (0x4002c06580) (0x4000ac6460) Create stream
I1006 20:36:10.107815       7 log.go:172] (0x4002c06580) (0x4000ac6460) Stream added, broadcasting: 1
I1006 20:36:10.111429       7 log.go:172] (0x4002c06580) Reply frame received for 1
I1006 20:36:10.111624       7 log.go:172] (0x4002c06580) (0x4002344500) Create stream
I1006 20:36:10.111720       7 log.go:172] (0x4002c06580) (0x4002344500) Stream added, broadcasting: 3
I1006 20:36:10.113433       7 log.go:172] (0x4002c06580) Reply frame received for 3
I1006 20:36:10.113560       7 log.go:172] (0x4002c06580) (0x4002344640) Create stream
I1006 20:36:10.113622       7 log.go:172] (0x4002c06580) (0x4002344640) Stream added, broadcasting: 5
I1006 20:36:10.115110       7 log.go:172] (0x4002c06580) Reply frame received for 5
I1006 20:36:11.191436       7 log.go:172] (0x4002c06580) Data frame received for 5
I1006 20:36:11.191726       7 log.go:172] (0x4002344640) (5) Data frame handling
I1006 20:36:11.191917       7 log.go:172] (0x4002c06580) Data frame received for 3
I1006 20:36:11.192141       7 log.go:172] (0x4002344500) (3) Data frame handling
I1006 20:36:11.192394       7 log.go:172] (0x4002344500) (3) Data frame sent
I1006 20:36:11.192532       7 log.go:172] (0x4002c06580) Data frame received for 3
I1006 20:36:11.192677       7 log.go:172] (0x4002344500) (3) Data frame handling
I1006 20:36:11.194308       7 log.go:172] (0x4002c06580) Data frame received for 1
I1006 20:36:11.194455       7 log.go:172] (0x4000ac6460) (1) Data frame handling
I1006 20:36:11.194614       7 log.go:172] (0x4000ac6460) (1) Data frame sent
I1006 20:36:11.194764       7 log.go:172] (0x4002c06580) (0x4000ac6460) Stream removed, broadcasting: 1
I1006 20:36:11.194978       7 log.go:172] (0x4002c06580) Go away received
I1006 20:36:11.195457       7 log.go:172] (0x4002c06580) (0x4000ac6460) Stream removed, broadcasting: 1
I1006 20:36:11.195649       7 log.go:172] (0x4002c06580) (0x4002344500) Stream removed, broadcasting: 3
I1006 20:36:11.195792       7 log.go:172] (0x4002c06580) (0x4002344640) Stream removed, broadcasting: 5
Oct  6 20:36:11.195: INFO: Found all expected endpoints: [netserver-0]
Oct  6 20:36:11.201: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.64 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6800 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct  6 20:36:11.201: INFO: >>> kubeConfig: /root/.kube/config
I1006 20:36:11.263854       7 log.go:172] (0x4002cb4420) (0x4002958640) Create stream
I1006 20:36:11.264050       7 log.go:172] (0x4002cb4420) (0x4002958640) Stream added, broadcasting: 1
I1006 20:36:11.268155       7 log.go:172] (0x4002cb4420) Reply frame received for 1
I1006 20:36:11.268340       7 log.go:172] (0x4002cb4420) (0x40023446e0) Create stream
I1006 20:36:11.268429       7 log.go:172] (0x4002cb4420) (0x40023446e0) Stream added, broadcasting: 3
I1006 20:36:11.270456       7 log.go:172] (0x4002cb4420) Reply frame received for 3
I1006 20:36:11.270674       7 log.go:172] (0x4002cb4420) (0x4002344820) Create stream
I1006 20:36:11.270786       7 log.go:172] (0x4002cb4420) (0x4002344820) Stream added, broadcasting: 5
I1006 20:36:11.272394       7 log.go:172] (0x4002cb4420) Reply frame received for 5
I1006 20:36:12.365477       7 log.go:172] (0x4002cb4420) Data frame received for 3
I1006 20:36:12.365701       7 log.go:172] (0x40023446e0) (3) Data frame handling
I1006 20:36:12.365859       7 log.go:172] (0x40023446e0) (3) Data frame sent
I1006 20:36:12.365992       7 log.go:172] (0x4002cb4420) Data frame received for 3
I1006 20:36:12.366148       7 log.go:172] (0x40023446e0) (3) Data frame handling
I1006 20:36:12.366518       7 log.go:172] (0x4002cb4420) Data frame received for 5
I1006 20:36:12.366677       7 log.go:172] (0x4002344820) (5) Data frame handling
I1006 20:36:12.367476       7 log.go:172] (0x4002cb4420) Data frame received for 1
I1006 20:36:12.367661       7 log.go:172] (0x4002958640) (1) Data frame handling
I1006 20:36:12.367810       7 log.go:172] (0x4002958640) (1) Data frame sent
I1006 20:36:12.367951       7 log.go:172] (0x4002cb4420) (0x4002958640) Stream removed, broadcasting: 1
I1006 20:36:12.368123       7 log.go:172] (0x4002cb4420) Go away received
I1006 20:36:12.368443       7 log.go:172] (0x4002cb4420) (0x4002958640) Stream removed, broadcasting: 1
I1006 20:36:12.368552       7 log.go:172] (0x4002cb4420) (0x40023446e0) Stream removed, broadcasting: 3
I1006 20:36:12.368661       7 log.go:172] (0x4002cb4420) (0x4002344820) Stream removed, broadcasting: 5
Oct  6 20:36:12.368: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:36:12.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6800" for this suite.

• [SLOW TEST:30.633 seconds]
[sig-network] Networking
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1745,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:36:12.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl logs
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
STEP: creating a pod
Oct  6 20:36:12.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-3693 -- logs-generator --log-lines-total 100 --run-duration 20s'
Oct  6 20:36:13.754: INFO: stderr: ""
Oct  6 20:36:13.754: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Oct  6 20:36:13.755: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Oct  6 20:36:13.755: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3693" to be "running and ready, or succeeded"
Oct  6 20:36:13.760: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.553349ms
Oct  6 20:36:15.767: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012307774s
Oct  6 20:36:17.776: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.020497984s
Oct  6 20:36:17.776: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Oct  6 20:36:17.776: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Oct  6 20:36:17.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3693'
Oct  6 20:36:19.153: INFO: stderr: ""
Oct  6 20:36:19.153: INFO: stdout: "I1006 20:36:16.138684       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/zhk2 286\nI1006 20:36:16.338910       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/ncpw 445\nI1006 20:36:16.538912       1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/2jr9 263\nI1006 20:36:16.738841       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/6dwr 475\nI1006 20:36:16.938944       1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/2fm 533\nI1006 20:36:17.138860       1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/2s2 344\nI1006 20:36:17.338892       1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/7jgf 449\nI1006 20:36:17.538895       1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/mxl 233\nI1006 20:36:17.738885       1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/r5f 307\nI1006 20:36:17.938882       1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/5db 215\nI1006 20:36:18.138922       1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/n9f 547\nI1006 20:36:18.338808       1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/442w 483\nI1006 20:36:18.538833       1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/c5w 376\nI1006 20:36:18.738847       1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/76xz 520\nI1006 20:36:18.938837       1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/5w9 290\nI1006 20:36:19.138853       1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/lwpj 260\n"
STEP: limiting log lines
Oct  6 20:36:19.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3693 --tail=1'
Oct  6 20:36:20.477: INFO: stderr: ""
Oct  6 20:36:20.477: INFO: stdout: "I1006 20:36:20.338996       1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/nmf8 569\n"
Oct  6 20:36:20.477: INFO: got output "I1006 20:36:20.338996       1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/nmf8 569\n"
STEP: limiting log bytes
Oct  6 20:36:20.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3693 --limit-bytes=1'
Oct  6 20:36:21.799: INFO: stderr: ""
Oct  6 20:36:21.799: INFO: stdout: "I"
Oct  6 20:36:21.799: INFO: got output "I"
STEP: exposing timestamps
Oct  6 20:36:21.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3693 --tail=1 --timestamps'
Oct  6 20:36:23.069: INFO: stderr: ""
Oct  6 20:36:23.069: INFO: stdout: "2020-10-06T20:36:22.938991871Z I1006 20:36:22.938860       1 logs_generator.go:76] 34 GET /api/v1/namespaces/kube-system/pods/sm2 534\n"
Oct  6 20:36:23.070: INFO: got output "2020-10-06T20:36:22.938991871Z I1006 20:36:22.938860       1 logs_generator.go:76] 34 GET /api/v1/namespaces/kube-system/pods/sm2 534\n"
STEP: restricting to a time range
Oct  6 20:36:25.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3693 --since=1s'
Oct  6 20:36:26.866: INFO: stderr: ""
Oct  6 20:36:26.866: INFO: stdout: "I1006 20:36:25.938849       1 logs_generator.go:76] 49 GET /api/v1/namespaces/kube-system/pods/vhh 433\nI1006 20:36:26.138835       1 logs_generator.go:76] 50 POST /api/v1/namespaces/default/pods/tktk 262\nI1006 20:36:26.338834       1 logs_generator.go:76] 51 GET /api/v1/namespaces/default/pods/tgqw 458\nI1006 20:36:26.538852       1 logs_generator.go:76] 52 POST /api/v1/namespaces/default/pods/gc68 305\nI1006 20:36:26.738828       1 logs_generator.go:76] 53 POST /api/v1/namespaces/default/pods/9p6 222\n"
Oct  6 20:36:26.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3693 --since=24h'
Oct  6 20:36:28.130: INFO: stderr: ""
Oct  6 20:36:28.130: INFO: stdout: "I1006 20:36:16.138684       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/zhk2 286\nI1006 20:36:16.338910       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/ncpw 445\nI1006 20:36:16.538912       1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/2jr9 263\nI1006 20:36:16.738841       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/6dwr 475\nI1006 20:36:16.938944       1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/2fm 533\nI1006 20:36:17.138860       1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/2s2 344\nI1006 20:36:17.338892       1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/7jgf 449\nI1006 20:36:17.538895       1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/mxl 233\nI1006 20:36:17.738885       1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/r5f 307\nI1006 20:36:17.938882       1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/5db 215\nI1006 20:36:18.138922       1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/n9f 547\nI1006 20:36:18.338808       1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/442w 483\nI1006 20:36:18.538833       1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/c5w 376\nI1006 20:36:18.738847       1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/76xz 520\nI1006 20:36:18.938837       1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/5w9 290\nI1006 20:36:19.138853       1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/lwpj 260\nI1006 20:36:19.338860       1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/lq7 349\nI1006 20:36:19.538851       1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/n7d 597\nI1006 20:36:19.738839       1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/qjj 409\nI1006 20:36:19.938901       1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/z9r 518\nI1006 20:36:20.138835       1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/xwfv 474\nI1006 20:36:20.338996       1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/nmf8 569\nI1006 20:36:20.538858       1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/npj 207\nI1006 20:36:20.738850       1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/blf 457\nI1006 20:36:20.938823       1 logs_generator.go:76] 24 POST /api/v1/namespaces/kube-system/pods/db6m 225\nI1006 20:36:21.138857       1 logs_generator.go:76] 25 GET /api/v1/namespaces/ns/pods/2snk 222\nI1006 20:36:21.338860       1 logs_generator.go:76] 26 PUT /api/v1/namespaces/kube-system/pods/4nqr 449\nI1006 20:36:21.538956       1 logs_generator.go:76] 27 PUT /api/v1/namespaces/kube-system/pods/jpbk 475\nI1006 20:36:21.738856       1 logs_generator.go:76] 28 GET /api/v1/namespaces/kube-system/pods/tqsg 418\nI1006 20:36:21.938849       1 logs_generator.go:76] 29 GET /api/v1/namespaces/default/pods/xxf6 305\nI1006 20:36:22.138844       1 logs_generator.go:76] 30 GET /api/v1/namespaces/kube-system/pods/h45 250\nI1006 20:36:22.338837       1 logs_generator.go:76] 31 GET /api/v1/namespaces/ns/pods/wlsn 236\nI1006 20:36:22.538844       1 logs_generator.go:76] 32 PUT /api/v1/namespaces/default/pods/sms 500\nI1006 20:36:22.738870       1 logs_generator.go:76] 33 GET /api/v1/namespaces/ns/pods/r69 409\nI1006 20:36:22.938860       1 logs_generator.go:76] 34 GET /api/v1/namespaces/kube-system/pods/sm2 534\nI1006 20:36:23.138782       1 logs_generator.go:76] 35 GET /api/v1/namespaces/ns/pods/nzd9 213\nI1006 20:36:23.338867       1 logs_generator.go:76] 36 POST /api/v1/namespaces/ns/pods/zqw5 244\nI1006 20:36:23.538981       1 logs_generator.go:76] 37 PUT /api/v1/namespaces/default/pods/jhl 233\nI1006 20:36:23.738888       1 logs_generator.go:76] 38 GET /api/v1/namespaces/default/pods/9kgr 209\nI1006 20:36:23.938886       1 logs_generator.go:76] 39 POST /api/v1/namespaces/default/pods/nxm 291\nI1006 20:36:24.138842       1 logs_generator.go:76] 40 GET /api/v1/namespaces/ns/pods/dd69 522\nI1006 20:36:24.338897       1 logs_generator.go:76] 41 PUT /api/v1/namespaces/kube-system/pods/tzv 539\nI1006 20:36:24.538895       1 logs_generator.go:76] 42 PUT /api/v1/namespaces/default/pods/pgvx 252\nI1006 20:36:24.738836       1 logs_generator.go:76] 43 POST /api/v1/namespaces/ns/pods/txv 339\nI1006 20:36:24.938873       1 logs_generator.go:76] 44 POST /api/v1/namespaces/default/pods/pnvk 526\nI1006 20:36:25.138913       1 logs_generator.go:76] 45 GET /api/v1/namespaces/kube-system/pods/jjf 294\nI1006 20:36:25.338924       1 logs_generator.go:76] 46 GET /api/v1/namespaces/ns/pods/g85 384\nI1006 20:36:25.538893       1 logs_generator.go:76] 47 POST /api/v1/namespaces/default/pods/52f 205\nI1006 20:36:25.738856       1 logs_generator.go:76] 48 GET /api/v1/namespaces/kube-system/pods/lxx 255\nI1006 20:36:25.938849       1 logs_generator.go:76] 49 GET /api/v1/namespaces/kube-system/pods/vhh 433\nI1006 20:36:26.138835       1 logs_generator.go:76] 50 POST /api/v1/namespaces/default/pods/tktk 262\nI1006 20:36:26.338834       1 logs_generator.go:76] 51 GET /api/v1/namespaces/default/pods/tgqw 458\nI1006 20:36:26.538852       1 logs_generator.go:76] 52 POST /api/v1/namespaces/default/pods/gc68 305\nI1006 20:36:26.738828       1 logs_generator.go:76] 53 POST /api/v1/namespaces/default/pods/9p6 222\nI1006 20:36:26.938844       1 logs_generator.go:76] 54 POST /api/v1/namespaces/kube-system/pods/knj 449\nI1006 20:36:27.138862       1 logs_generator.go:76] 55 GET /api/v1/namespaces/ns/pods/c285 210\nI1006 20:36:27.338835       1 logs_generator.go:76] 56 GET /api/v1/namespaces/kube-system/pods/4pz 492\nI1006 20:36:27.538842       1 logs_generator.go:76] 57 POST /api/v1/namespaces/default/pods/dtkz 238\nI1006 20:36:27.738820       1 logs_generator.go:76] 58 PUT /api/v1/namespaces/default/pods/jnsm 431\nI1006 20:36:27.938880       1 logs_generator.go:76] 59 POST /api/v1/namespaces/kube-system/pods/c7q2 345\n"
[AfterEach] Kubectl logs
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Oct  6 20:36:28.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-3693'
Oct  6 20:36:34.325: INFO: stderr: ""
Oct  6 20:36:34.325: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:36:34.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3693" for this suite.

• [SLOW TEST:21.956 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1354
    should be able to retrieve and filter logs  [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":109,"skipped":1761,"failed":0}
SSSSSSSS
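The kubectl run above exercises `--tail`, `--limit-bytes`, `--timestamps`, and `--since`. The first two have direct coreutils analogues, shown here against a stand-in log file (no cluster required; /tmp/logs.txt and its contents are hypothetical):

```shell
# Build a three-line stand-in for a pod's log stream.
printf 'line1\nline2\nline3\n' > /tmp/logs.txt
tail -n 1 /tmp/logs.txt   # analogous to: kubectl logs POD --tail=1
head -c 1 /tmp/logs.txt   # analogous to: kubectl logs POD --limit-bytes=1
```

`--since` and `--timestamps` have no one-line local analogue because they rely on the per-line timestamps the container runtime records alongside each log entry.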
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:36:34.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Oct  6 20:36:34.426: INFO: Waiting up to 5m0s for pod "pod-7033b028-3abd-4e2d-8188-aba3b038469e" in namespace "emptydir-4660" to be "success or failure"
Oct  6 20:36:34.469: INFO: Pod "pod-7033b028-3abd-4e2d-8188-aba3b038469e": Phase="Pending", Reason="", readiness=false. Elapsed: 42.836562ms
Oct  6 20:36:36.475: INFO: Pod "pod-7033b028-3abd-4e2d-8188-aba3b038469e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048816739s
Oct  6 20:36:38.482: INFO: Pod "pod-7033b028-3abd-4e2d-8188-aba3b038469e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055895075s
STEP: Saw pod success
Oct  6 20:36:38.482: INFO: Pod "pod-7033b028-3abd-4e2d-8188-aba3b038469e" satisfied condition "success or failure"
Oct  6 20:36:38.486: INFO: Trying to get logs from node jerma-worker2 pod pod-7033b028-3abd-4e2d-8188-aba3b038469e container test-container: 
STEP: delete the pod
Oct  6 20:36:38.522: INFO: Waiting for pod pod-7033b028-3abd-4e2d-8188-aba3b038469e to disappear
Oct  6 20:36:38.551: INFO: Pod pod-7033b028-3abd-4e2d-8188-aba3b038469e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:36:38.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4660" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1769,"failed":0}
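The test above can be sketched as the following illustrative pod manifest (not the e2e framework's actual code): an emptyDir volume on the node's default medium, mounted by a container that creates a 0666-mode file and prints its permissions so the test can verify them. All names here are hypothetical.

```python
# Hedged sketch of the kind of pod the (root,0666,default) EmptyDir test
# creates. Pod, volume, and container names are illustrative only.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "emptydir-0666-demo"},
    "spec": {
        "restartPolicy": "Never",
        "volumes": [
            # medium "" selects the node's default storage medium
            {"name": "test-volume", "emptyDir": {"medium": ""}},
        ],
        "containers": [{
            "name": "test-container",
            "image": "busybox",
            # create a 0666 file and print its mode for verification
            "command": ["sh", "-c",
                        "touch /mnt/test/f && chmod 0666 /mnt/test/f "
                        "&& stat -c %a /mnt/test/f"],
            "volumeMounts": [{"name": "test-volume",
                              "mountPath": "/mnt/test"}],
        }],
    },
}
```

The "success or failure" wait in the log corresponds to the pod reaching phase Succeeded after this command exits 0.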
SSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:36:38.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Oct  6 20:36:38.862: INFO: Created pod &Pod{ObjectMeta:{dns-3716  dns-3716 /api/v1/namespaces/dns-3716/pods/dns-3716 b1274251-0670-49eb-aecb-9e4a7217582a 3605721 0 2020-10-06 20:36:38 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cghr2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cghr2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cghr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostn
ame:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Oct  6 20:36:42.890: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-3716 PodName:dns-3716 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct  6 20:36:42.890: INFO: >>> kubeConfig: /root/.kube/config
I1006 20:36:42.947476       7 log.go:172] (0x400291bad0) (0x4000fe5860) Create stream
I1006 20:36:42.947615       7 log.go:172] (0x400291bad0) (0x4000fe5860) Stream added, broadcasting: 1
I1006 20:36:42.951216       7 log.go:172] (0x400291bad0) Reply frame received for 1
I1006 20:36:42.951390       7 log.go:172] (0x400291bad0) (0x4000fe5900) Create stream
I1006 20:36:42.951479       7 log.go:172] (0x400291bad0) (0x4000fe5900) Stream added, broadcasting: 3
I1006 20:36:42.952991       7 log.go:172] (0x400291bad0) Reply frame received for 3
I1006 20:36:42.953134       7 log.go:172] (0x400291bad0) (0x4000fe5ae0) Create stream
I1006 20:36:42.953215       7 log.go:172] (0x400291bad0) (0x4000fe5ae0) Stream added, broadcasting: 5
I1006 20:36:42.954503       7 log.go:172] (0x400291bad0) Reply frame received for 5
I1006 20:36:43.081819       7 log.go:172] (0x400291bad0) Data frame received for 3
I1006 20:36:43.081984       7 log.go:172] (0x4000fe5900) (3) Data frame handling
I1006 20:36:43.082144       7 log.go:172] (0x4000fe5900) (3) Data frame sent
I1006 20:36:43.083012       7 log.go:172] (0x400291bad0) Data frame received for 5
I1006 20:36:43.083160       7 log.go:172] (0x400291bad0) Data frame received for 3
I1006 20:36:43.083383       7 log.go:172] (0x4000fe5900) (3) Data frame handling
I1006 20:36:43.083507       7 log.go:172] (0x4000fe5ae0) (5) Data frame handling
I1006 20:36:43.085610       7 log.go:172] (0x400291bad0) Data frame received for 1
I1006 20:36:43.085766       7 log.go:172] (0x4000fe5860) (1) Data frame handling
I1006 20:36:43.085926       7 log.go:172] (0x4000fe5860) (1) Data frame sent
I1006 20:36:43.086153       7 log.go:172] (0x400291bad0) (0x4000fe5860) Stream removed, broadcasting: 1
I1006 20:36:43.086370       7 log.go:172] (0x400291bad0) Go away received
I1006 20:36:43.086732       7 log.go:172] (0x400291bad0) (0x4000fe5860) Stream removed, broadcasting: 1
I1006 20:36:43.086861       7 log.go:172] (0x400291bad0) (0x4000fe5900) Stream removed, broadcasting: 3
I1006 20:36:43.086983       7 log.go:172] (0x400291bad0) (0x4000fe5ae0) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Oct  6 20:36:43.087: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-3716 PodName:dns-3716 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct  6 20:36:43.088: INFO: >>> kubeConfig: /root/.kube/config
I1006 20:36:43.151529       7 log.go:172] (0x4002ff8370) (0x400144e500) Create stream
I1006 20:36:43.151669       7 log.go:172] (0x4002ff8370) (0x400144e500) Stream added, broadcasting: 1
I1006 20:36:43.155287       7 log.go:172] (0x4002ff8370) Reply frame received for 1
I1006 20:36:43.155530       7 log.go:172] (0x4002ff8370) (0x40003f8640) Create stream
I1006 20:36:43.155651       7 log.go:172] (0x4002ff8370) (0x40003f8640) Stream added, broadcasting: 3
I1006 20:36:43.157771       7 log.go:172] (0x4002ff8370) Reply frame received for 3
I1006 20:36:43.157997       7 log.go:172] (0x4002ff8370) (0x400144e5a0) Create stream
I1006 20:36:43.158129       7 log.go:172] (0x4002ff8370) (0x400144e5a0) Stream added, broadcasting: 5
I1006 20:36:43.160004       7 log.go:172] (0x4002ff8370) Reply frame received for 5
I1006 20:36:43.227461       7 log.go:172] (0x4002ff8370) Data frame received for 3
I1006 20:36:43.227694       7 log.go:172] (0x40003f8640) (3) Data frame handling
I1006 20:36:43.228026       7 log.go:172] (0x40003f8640) (3) Data frame sent
I1006 20:36:43.228334       7 log.go:172] (0x4002ff8370) Data frame received for 3
I1006 20:36:43.228539       7 log.go:172] (0x40003f8640) (3) Data frame handling
I1006 20:36:43.228777       7 log.go:172] (0x4002ff8370) Data frame received for 5
I1006 20:36:43.229202       7 log.go:172] (0x400144e5a0) (5) Data frame handling
I1006 20:36:43.229595       7 log.go:172] (0x4002ff8370) Data frame received for 1
I1006 20:36:43.229733       7 log.go:172] (0x400144e500) (1) Data frame handling
I1006 20:36:43.229901       7 log.go:172] (0x400144e500) (1) Data frame sent
I1006 20:36:43.230047       7 log.go:172] (0x4002ff8370) (0x400144e500) Stream removed, broadcasting: 1
I1006 20:36:43.230230       7 log.go:172] (0x4002ff8370) Go away received
I1006 20:36:43.230660       7 log.go:172] (0x4002ff8370) (0x400144e500) Stream removed, broadcasting: 1
I1006 20:36:43.230813       7 log.go:172] (0x4002ff8370) (0x40003f8640) Stream removed, broadcasting: 3
I1006 20:36:43.230943       7 log.go:172] (0x4002ff8370) (0x400144e5a0) Stream removed, broadcasting: 5
Oct  6 20:36:43.231: INFO: Deleting pod dns-3716...
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:36:43.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3716" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":111,"skipped":1774,"failed":0}
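The pod created in the DNS test above boils down to the following sketch, using the exact values visible in the log (nameserver 1.1.1.1, search domain resolv.conf.local); pod and container names are illustrative. With `dnsPolicy: None`, the kubelet builds the pod's /etc/resolv.conf entirely from `dnsConfig`.

```python
# Minimal sketch of a pod with a fully customized DNS configuration,
# mirroring the dnsPolicy/dnsConfig values shown in the test log above.
dns_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "dns-demo"},
    "spec": {
        # "None" means: ignore cluster DNS, use only dnsConfig below
        "dnsPolicy": "None",
        "dnsConfig": {
            "nameservers": ["1.1.1.1"],
            "searches": ["resolv.conf.local"],
        },
        "containers": [{
            "name": "agnhost",
            "image": "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
            "args": ["pause"],
        }],
    },
}
```

The test then execs `/agnhost dns-suffix` and `/agnhost dns-server-list` inside the pod to confirm both values landed in resolv.conf.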
SSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:36:43.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 20:36:43.487: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:36:44.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1326" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":112,"skipped":1779,"failed":0}
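A CustomResourceDefinition of the shape this test creates and deletes looks roughly like the following sketch; the group and kind names here are hypothetical, not the test's own (the log does not show them).

```python
# Illustrative minimal CRD (apiextensions.k8s.io/v1). The metadata name
# must be <plural>.<group>; one version must have storage: True.
crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "widgets.example.com"},
    "spec": {
        "group": "example.com",
        "scope": "Namespaced",
        "names": {"plural": "widgets", "singular": "widget",
                  "kind": "Widget"},
        "versions": [{
            "name": "v1",
            "served": True,
            "storage": True,
            # v1 CRDs require a structural schema
            "schema": {"openAPIV3Schema": {"type": "object"}},
        }],
    },
}
```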
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:36:44.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Oct  6 20:36:44.643: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Oct  6 20:36:44.672: INFO: Waiting for terminating namespaces to be deleted...
Oct  6 20:36:44.699: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Oct  6 20:36:44.711: INFO: kube-proxy-knc9b from kube-system started at 2020-09-23 08:27:39 +0000 UTC (1 container status recorded)
Oct  6 20:36:44.711: INFO: 	Container kube-proxy ready: true, restart count 0
Oct  6 20:36:44.711: INFO: kindnet-nlsvd from kube-system started at 2020-09-23 08:27:39 +0000 UTC (1 container status recorded)
Oct  6 20:36:44.711: INFO: 	Container kindnet-cni ready: true, restart count 0
Oct  6 20:36:44.711: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Oct  6 20:36:44.721: INFO: kube-proxy-jgndm from kube-system started at 2020-09-23 08:27:38 +0000 UTC (1 container status recorded)
Oct  6 20:36:44.721: INFO: 	Container kube-proxy ready: true, restart count 0
Oct  6 20:36:44.721: INFO: kindnet-5wksn from kube-system started at 2020-09-23 08:27:38 +0000 UTC (1 container status recorded)
Oct  6 20:36:44.721: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-8a0ec9a8-8958-433c-8871-7e747e7855c0 90
STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1 and expect it to be scheduled
STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expect it to be scheduled
STEP: Trying to create a third pod (pod3) with hostPort 54321 and hostIP 127.0.0.2 but using the UDP protocol, on the node where pod2 resides
STEP: removing the label kubernetes.io/e2e-8a0ec9a8-8958-433c-8871-7e747e7855c0 off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-8a0ec9a8-8958-433c-8871-7e747e7855c0
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:37:00.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7626" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:16.375 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":113,"skipped":1780,"failed":0}
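The scheduling behavior verified above can be summarized in a small sketch: hostPort conflicts are keyed on the (hostIP, hostPort, protocol) triple, so changing any one component lets all three pods land on the same node. The values mirror the log; the helper name is illustrative.

```python
# Sketch of the hostPort-conflict rule the test exercises: two pods
# conflict only if hostIP, hostPort, AND protocol are all identical.
def port_key(p):
    """The triple compared when checking hostPort conflicts."""
    return (p["hostIP"], p["hostPort"], p["protocol"])

pod1 = {"hostIP": "127.0.0.1", "hostPort": 54321, "protocol": "TCP"}
pod2 = {"hostIP": "127.0.0.2", "hostPort": 54321, "protocol": "TCP"}
pod3 = {"hostIP": "127.0.0.2", "hostPort": 54321, "protocol": "UDP"}

# All three keys differ, so all three pods schedule onto one node.
assert len({port_key(p) for p in (pod1, pod2, pod3)}) == 3
```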
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:37:00.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Oct  6 20:37:01.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Oct  6 20:37:02.357: INFO: stderr: ""
Oct  6 20:37:02.357: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:37:02.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6784" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":114,"skipped":1828,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job [Deprecated] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:37:02.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create a job from an image, then delete the job [Deprecated] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Oct  6 20:37:02.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5545 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Oct  6 20:37:07.306: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI1006 20:37:07.176521    1967 log.go:172] (0x4000a3c210) (0x40005220a0) Create stream\nI1006 20:37:07.180529    1967 log.go:172] (0x4000a3c210) (0x40005220a0) Stream added, broadcasting: 1\nI1006 20:37:07.191075    1967 log.go:172] (0x4000a3c210) Reply frame received for 1\nI1006 20:37:07.191713    1967 log.go:172] (0x4000a3c210) (0x4000590000) Create stream\nI1006 20:37:07.191786    1967 log.go:172] (0x4000a3c210) (0x4000590000) Stream added, broadcasting: 3\nI1006 20:37:07.194018    1967 log.go:172] (0x4000a3c210) Reply frame received for 3\nI1006 20:37:07.194693    1967 log.go:172] (0x4000a3c210) (0x4000592000) Create stream\nI1006 20:37:07.194826    1967 log.go:172] (0x4000a3c210) (0x4000592000) Stream added, broadcasting: 5\nI1006 20:37:07.196692    1967 log.go:172] (0x4000a3c210) Reply frame received for 5\nI1006 20:37:07.197027    1967 log.go:172] (0x4000a3c210) (0x40005900a0) Create stream\nI1006 20:37:07.197091    1967 log.go:172] (0x4000a3c210) (0x40005900a0) Stream added, broadcasting: 7\nI1006 20:37:07.198461    1967 log.go:172] (0x4000a3c210) Reply frame received for 7\nI1006 20:37:07.201501    1967 log.go:172] (0x4000590000) (3) Writing data frame\nI1006 20:37:07.202425    1967 log.go:172] (0x4000590000) (3) Writing data frame\nI1006 20:37:07.203240    1967 log.go:172] (0x4000a3c210) Data frame received for 5\nI1006 20:37:07.203394    1967 log.go:172] (0x4000592000) (5) Data frame handling\nI1006 20:37:07.203622    1967 log.go:172] (0x4000592000) (5) Data frame sent\nI1006 20:37:07.204327    1967 log.go:172] (0x4000a3c210) Data frame received for 5\nI1006 20:37:07.204429    1967 log.go:172] (0x4000592000) (5) Data frame handling\nI1006 20:37:07.204531    1967 log.go:172] (0x4000592000) (5) Data frame 
sent\nI1006 20:37:07.232215    1967 log.go:172] (0x4000a3c210) Data frame received for 7\nI1006 20:37:07.232634    1967 log.go:172] (0x40005900a0) (7) Data frame handling\nI1006 20:37:07.232924    1967 log.go:172] (0x4000a3c210) Data frame received for 5\nI1006 20:37:07.233097    1967 log.go:172] (0x4000592000) (5) Data frame handling\nI1006 20:37:07.233340    1967 log.go:172] (0x4000a3c210) Data frame received for 1\nI1006 20:37:07.233617    1967 log.go:172] (0x40005220a0) (1) Data frame handling\nI1006 20:37:07.233805    1967 log.go:172] (0x40005220a0) (1) Data frame sent\nI1006 20:37:07.235929    1967 log.go:172] (0x4000a3c210) (0x40005220a0) Stream removed, broadcasting: 1\nI1006 20:37:07.236641    1967 log.go:172] (0x4000a3c210) (0x4000590000) Stream removed, broadcasting: 3\nI1006 20:37:07.237210    1967 log.go:172] (0x4000a3c210) Go away received\nI1006 20:37:07.241059    1967 log.go:172] (0x4000a3c210) (0x40005220a0) Stream removed, broadcasting: 1\nI1006 20:37:07.241711    1967 log.go:172] (0x4000a3c210) (0x4000590000) Stream removed, broadcasting: 3\nI1006 20:37:07.241827    1967 log.go:172] (0x4000a3c210) (0x4000592000) Stream removed, broadcasting: 5\nI1006 20:37:07.242454    1967 log.go:172] (0x4000a3c210) (0x40005900a0) Stream removed, broadcasting: 7\n"
Oct  6 20:37:07.307: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:37:09.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5545" for this suite.

• [SLOW TEST:6.956 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1843
    should create a job from an image, then delete the job [Deprecated] [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Deprecated] [Conformance]","total":278,"completed":115,"skipped":1833,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:37:09.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:38:09.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1" for this suite.

• [SLOW TEST:60.108 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1853,"failed":0}
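The container spec exercised above is roughly the following sketch: a readiness probe that always fails leaves the pod Running but never Ready, and, unlike a liveness probe, a failing readiness probe never restarts the container, which is what the 60-second observation window checks. Names and probe values here are illustrative.

```python
# Hedged sketch of a container whose readiness probe can never succeed.
container = {
    "name": "probe-demo",
    "image": "busybox",
    "command": ["sleep", "3600"],
    "readinessProbe": {
        # /bin/false always exits non-zero, so Ready stays false
        "exec": {"command": ["/bin/false"]},
        "initialDelaySeconds": 5,
        "periodSeconds": 5,
    },
    # deliberately no livenessProbe: nothing ever restarts the container
}
```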
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:38:09.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-223
[It] Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-223
STEP: Creating statefulset with conflicting port in namespace statefulset-223
STEP: Waiting until pod test-pod starts running in namespace statefulset-223
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-223
Oct  6 20:38:15.586: INFO: Observed stateful pod in namespace: statefulset-223, name: ss-0, uid: 44ebdae0-705d-48a5-9d04-56582b92b5bf, status phase: Pending. Waiting for statefulset controller to delete.
Oct  6 20:38:16.159: INFO: Observed stateful pod in namespace: statefulset-223, name: ss-0, uid: 44ebdae0-705d-48a5-9d04-56582b92b5bf, status phase: Failed. Waiting for statefulset controller to delete.
Oct  6 20:38:16.175: INFO: Observed stateful pod in namespace: statefulset-223, name: ss-0, uid: 44ebdae0-705d-48a5-9d04-56582b92b5bf, status phase: Failed. Waiting for statefulset controller to delete.
Oct  6 20:38:16.188: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-223
STEP: Removing pod with conflicting port in namespace statefulset-223
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-223 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Oct  6 20:38:20.266: INFO: Deleting all statefulset in ns statefulset-223
Oct  6 20:38:20.272: INFO: Scaling statefulset ss to 0
Oct  6 20:38:40.296: INFO: Waiting for statefulset status.replicas updated to 0
Oct  6 20:38:40.300: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:38:40.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-223" for this suite.

• [SLOW TEST:30.882 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":117,"skipped":1906,"failed":0}
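The recurring "Waiting up to ..." lines throughout this log come from a poll-until-deadline loop. A minimal, generic sketch of that pattern (plain Python; the real framework uses Go helpers such as wait.PollImmediate, so this is an illustration, not the framework's code):

```python
import time

def wait_for(predicate, timeout, interval=1.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `predicate` until it returns truthy or `timeout` seconds pass.

    Illustrative sketch of the wait pattern behind the log's
    "Waiting up to ..." lines. `clock` and `sleep` are injectable so the
    loop can be exercised without real delays.
    """
    deadline = clock() + timeout
    while True:
        if predicate():       # check first, so an already-met condition returns fast
            return True
        if clock() >= deadline:
            return False      # timed out, mirroring the framework's failure path
        sleep(interval)
```

The real framework layers retries, error collection, and reporting on top; the sketch shows only the core loop.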
SSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:38:40.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-7531/secret-test-95f2a2ca-02c3-45c2-b98f-f7d1e559c7e4
STEP: Creating a pod to test consume secrets
Oct  6 20:38:40.406: INFO: Waiting up to 5m0s for pod "pod-configmaps-501e85bb-63ba-4ca4-9e4c-cfccea87eae5" in namespace "secrets-7531" to be "success or failure"
Oct  6 20:38:40.414: INFO: Pod "pod-configmaps-501e85bb-63ba-4ca4-9e4c-cfccea87eae5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.871603ms
Oct  6 20:38:42.439: INFO: Pod "pod-configmaps-501e85bb-63ba-4ca4-9e4c-cfccea87eae5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033424645s
Oct  6 20:38:44.447: INFO: Pod "pod-configmaps-501e85bb-63ba-4ca4-9e4c-cfccea87eae5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040687759s
STEP: Saw pod success
Oct  6 20:38:44.447: INFO: Pod "pod-configmaps-501e85bb-63ba-4ca4-9e4c-cfccea87eae5" satisfied condition "success or failure"
Oct  6 20:38:44.452: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-501e85bb-63ba-4ca4-9e4c-cfccea87eae5 container env-test: 
STEP: delete the pod
Oct  6 20:38:44.504: INFO: Waiting for pod pod-configmaps-501e85bb-63ba-4ca4-9e4c-cfccea87eae5 to disappear
Oct  6 20:38:44.510: INFO: Pod pod-configmaps-501e85bb-63ba-4ca4-9e4c-cfccea87eae5 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:38:44.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7531" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1915,"failed":0}
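The pod this test creates follows a standard pattern: one secret key surfaced to the container as an environment variable via `secretKeyRef`. A sketch of such a manifest as plain Python data (the container name `env-test` matches the log above; the image, variable name, and other names are illustrative, not what the framework generates):

```python
def secret_env_pod(pod_name, secret_name, key):
    """Build a minimal pod manifest that exposes one secret key as an
    environment variable, the pattern this conformance test verifies.
    Illustrative sketch; field names follow the core/v1 Pod API."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "env-test",          # matches the container name in the log
                "image": "busybox",          # illustrative image choice
                "command": ["sh", "-c", "env"],
                "env": [{
                    "name": "SECRET_DATA",   # illustrative variable name
                    "valueFrom": {
                        "secretKeyRef": {"name": secret_name, "key": key},
                    },
                }],
            }],
        },
    }
```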
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:38:44.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Oct  6 20:38:44.609: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Oct  6 20:38:44.635: INFO: Waiting for terminating namespaces to be deleted...
Oct  6 20:38:44.639: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Oct  6 20:38:44.658: INFO: kube-proxy-knc9b from kube-system started at 2020-09-23 08:27:39 +0000 UTC (1 container status recorded)
Oct  6 20:38:44.658: INFO: 	Container kube-proxy ready: true, restart count 0
Oct  6 20:38:44.658: INFO: kindnet-nlsvd from kube-system started at 2020-09-23 08:27:39 +0000 UTC (1 container status recorded)
Oct  6 20:38:44.658: INFO: 	Container kindnet-cni ready: true, restart count 0
Oct  6 20:38:44.659: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Oct  6 20:38:44.676: INFO: kindnet-5wksn from kube-system started at 2020-09-23 08:27:38 +0000 UTC (1 container status recorded)
Oct  6 20:38:44.676: INFO: 	Container kindnet-cni ready: true, restart count 0
Oct  6 20:38:44.676: INFO: kube-proxy-jgndm from kube-system started at 2020-09-23 08:27:38 +0000 UTC (1 container status recorded)
Oct  6 20:38:44.676: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-bac9c579-be44-47f2-9c66-82d353445ce7 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled
STEP: removing the label kubernetes.io/e2e-bac9c579-be44-47f2-9c66-82d353445ce7 off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-bac9c579-be44-47f2-9c66-82d353445ce7
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:43:52.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-175" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:308.363 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":119,"skipped":1921,"failed":0}
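The predicate this test exercises can be modeled as a standalone check: two hostPort requests conflict when the port and protocol match and the hostIPs overlap, where 0.0.0.0 (or an empty hostIP) overlaps every address. An illustrative reimplementation of that rule, not the scheduler's actual code:

```python
WILDCARD_IPS = {"", "0.0.0.0"}  # an empty hostIP defaults to 0.0.0.0

def host_ports_conflict(a, b):
    """Return True if two hostPort requests cannot coexist on one node.

    Mirrors the semantics the test above demonstrates: pod4 binds
    54322 on 0.0.0.0, so pod5's request for 54322 on 127.0.0.1 conflicts
    and pod5 stays unscheduled. Sketch only, not kube-scheduler code.
    """
    if a["hostPort"] != b["hostPort"] or a["protocol"] != b["protocol"]:
        return False
    return (a["hostIP"] in WILDCARD_IPS
            or b["hostIP"] in WILDCARD_IPS
            or a["hostIP"] == b["hostIP"])

pod4 = {"hostPort": 54322, "protocol": "TCP", "hostIP": ""}
pod5 = {"hostPort": 54322, "protocol": "TCP", "hostIP": "127.0.0.1"}
print(host_ports_conflict(pod4, pod5))  # True: pod5 cannot be scheduled here
```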
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:43:52.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct  6 20:43:53.982: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct  6 20:43:55.998: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613833, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613833, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613834, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613833, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 20:43:58.072: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613833, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613833, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613834, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613833, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 20:44:00.490: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613833, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613833, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613834, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613833, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 20:44:02.463: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613833, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613833, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613834, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613833, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 20:44:04.005: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613833, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613833, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613834, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613833, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 20:44:06.054: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613833, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613833, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613834, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613833, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 20:44:08.056: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613833, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613833, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613834, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613833, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct  6 20:44:11.069: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 20:44:11.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:44:12.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5749" for this suite.
STEP: Destroying namespace "webhook-5749-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:19.564 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":120,"skipped":1938,"failed":0}
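A validating webhook like the one registered above receives an AdmissionReview request and answers with allowed true or false. A toy handler sketch in the AdmissionReview v1 response shape (the deny-a-forbidden-key rule and the key name are illustrative stand-ins, not the e2e webhook image's actual logic):

```python
def review_custom_resource(admission_review, forbidden_key="webhook-disallow"):
    """Toy validating-webhook handler: deny any operation on a custom
    resource whose `data` carries a forbidden key.

    For DELETE, the object under review arrives as `oldObject`, so the
    handler falls back to it. Response shape follows admission.k8s.io/v1;
    the rule itself is an illustrative sketch.
    """
    req = admission_review["request"]
    obj = req.get("object") or req.get("oldObject") or {}
    allowed = forbidden_key not in obj.get("data", {})
    response = {"uid": req["uid"], "allowed": allowed}
    if not allowed:
        response["status"] = {
            "message": f"data key {forbidden_key!r} is not allowed"}
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response,
    }
```

Removing the offending key from the object, as the test does, turns the same request into an allowed one.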
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:44:12.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Oct  6 20:44:12.561: INFO: Waiting up to 5m0s for pod "client-containers-b57f5603-729f-4f87-bf8d-ff7edd4b6182" in namespace "containers-6377" to be "success or failure"
Oct  6 20:44:12.598: INFO: Pod "client-containers-b57f5603-729f-4f87-bf8d-ff7edd4b6182": Phase="Pending", Reason="", readiness=false. Elapsed: 35.974842ms
Oct  6 20:44:15.620: INFO: Pod "client-containers-b57f5603-729f-4f87-bf8d-ff7edd4b6182": Phase="Pending", Reason="", readiness=false. Elapsed: 3.058684882s
Oct  6 20:44:17.701: INFO: Pod "client-containers-b57f5603-729f-4f87-bf8d-ff7edd4b6182": Phase="Pending", Reason="", readiness=false. Elapsed: 5.139375496s
Oct  6 20:44:19.706: INFO: Pod "client-containers-b57f5603-729f-4f87-bf8d-ff7edd4b6182": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.144762438s
STEP: Saw pod success
Oct  6 20:44:19.706: INFO: Pod "client-containers-b57f5603-729f-4f87-bf8d-ff7edd4b6182" satisfied condition "success or failure"
Oct  6 20:44:19.710: INFO: Trying to get logs from node jerma-worker2 pod client-containers-b57f5603-729f-4f87-bf8d-ff7edd4b6182 container test-container: 
STEP: delete the pod
Oct  6 20:44:19.785: INFO: Waiting for pod client-containers-b57f5603-729f-4f87-bf8d-ff7edd4b6182 to disappear
Oct  6 20:44:19.794: INFO: Pod client-containers-b57f5603-729f-4f87-bf8d-ff7edd4b6182 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:44:19.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6377" for this suite.

• [SLOW TEST:7.321 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":1976,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:44:19.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:44:30.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9265" for this suite.

• [SLOW TEST:11.181 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":122,"skipped":2021,"failed":0}
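The lifecycle this test observes is simple bookkeeping: quota usage rises when a counted object is created and is released when it is deleted. A minimal model of that accounting (illustrative only; the real controller recomputes usage asynchronously from the API server, and enforcement happens at admission):

```python
class ResourceQuotaTracker:
    """Toy model of ResourceQuota accounting for counted objects.

    `hard` maps a resource name, e.g. "count/replicasets.apps", to its
    limit; `used` tracks current usage. Sketch of the semantics only.
    """

    def __init__(self, hard):
        self.hard = dict(hard)
        self.used = {name: 0 for name in hard}

    def create(self, resource):
        # Admission-style check: reject creations that would exceed the limit.
        if self.used[resource] + 1 > self.hard[resource]:
            raise RuntimeError(f"exceeded quota for {resource}")
        self.used[resource] += 1

    def delete(self, resource):
        # Deleting a counted object releases its usage.
        self.used[resource] = max(0, self.used[resource] - 1)
```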
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:44:30.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:44:37.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6611" for this suite.
STEP: Destroying namespace "nsdeletetest-1158" for this suite.
Oct  6 20:44:37.424: INFO: Namespace nsdeletetest-1158 was already deleted
STEP: Destroying namespace "nsdeletetest-789" for this suite.

• [SLOW TEST:6.436 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":123,"skipped":2036,"failed":0}

SS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:44:37.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Oct  6 20:44:39.333: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Oct  6 20:44:42.274: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613879, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613879, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613879, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613879, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 20:44:44.281: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613879, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613879, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613879, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613879, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct  6 20:44:47.304: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 20:44:47.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:44:48.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-3598" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:11.455 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":124,"skipped":2038,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:44:48.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-f4ddc22e-f2dd-416a-925d-1a16cb308c29
STEP: Creating secret with name s-test-opt-upd-baaaed17-c83c-439a-ad5b-6a912bd13068
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f4ddc22e-f2dd-416a-925d-1a16cb308c29
STEP: Updating secret s-test-opt-upd-baaaed17-c83c-439a-ad5b-6a912bd13068
STEP: Creating secret with name s-test-opt-create-7c912231-050a-471f-8b6b-2f11d857a59f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:44:59.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5610" for this suite.

• [SLOW TEST:10.543 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":2043,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:44:59.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-444.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-444.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct  6 20:45:17.629: INFO: DNS probes using dns-444/dns-test-43b6d9b8-964f-4fed-bfe5-2f638512223d succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:45:17.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-444" for this suite.

• [SLOW TEST:18.303 seconds]
[sig-network] DNS
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":126,"skipped":2077,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:45:17.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Oct  6 20:45:17.945: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:45:35.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4290" for this suite.

• [SLOW TEST:18.146 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":127,"skipped":2113,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:45:35.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct  6 20:45:40.270: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct  6 20:45:43.067: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613940, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613940, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613940, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613940, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 20:45:45.280: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613940, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613940, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613940, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613940, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 20:45:47.188: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613940, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613940, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613940, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613940, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 20:45:49.329: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613940, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613940, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613940, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613940, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 20:45:51.073: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613940, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613940, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613940, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737613940, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct  6 20:45:54.144: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:45:54.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4240" for this suite.
STEP: Destroying namespace "webhook-4240-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.582 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":128,"skipped":2122,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:45:54.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 20:45:54.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Oct  6 20:45:56.046: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-06T20:45:55Z generation:1 name:name1 resourceVersion:3608138 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:fc766ed5-6c82-4475-8571-c1a20448e97d] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Oct  6 20:46:06.056: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-06T20:46:06Z generation:1 name:name2 resourceVersion:3608182 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:7bbd7759-11f6-41b4-80b6-81a6e14dd178] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Oct  6 20:46:16.064: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-06T20:45:55Z generation:2 name:name1 resourceVersion:3608212 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:fc766ed5-6c82-4475-8571-c1a20448e97d] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Oct  6 20:46:26.072: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-06T20:46:06Z generation:2 name:name2 resourceVersion:3608242 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:7bbd7759-11f6-41b4-80b6-81a6e14dd178] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Oct  6 20:46:36.093: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-06T20:45:55Z generation:2 name:name1 resourceVersion:3608272 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:fc766ed5-6c82-4475-8571-c1a20448e97d] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Oct  6 20:46:46.111: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-06T20:46:06Z generation:2 name:name2 resourceVersion:3608300 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:7bbd7759-11f6-41b4-80b6-81a6e14dd178] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:46:56.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-1386" for this suite.

• [SLOW TEST:62.168 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":129,"skipped":2141,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:46:56.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct  6 20:46:56.773: INFO: Waiting up to 5m0s for pod "pod-effa336d-7d84-4f6a-88a8-e97fe91545d6" in namespace "emptydir-4267" to be "success or failure"
Oct  6 20:46:56.792: INFO: Pod "pod-effa336d-7d84-4f6a-88a8-e97fe91545d6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.840288ms
Oct  6 20:46:59.103: INFO: Pod "pod-effa336d-7d84-4f6a-88a8-e97fe91545d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.329717691s
Oct  6 20:47:01.108: INFO: Pod "pod-effa336d-7d84-4f6a-88a8-e97fe91545d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335016407s
Oct  6 20:47:03.113: INFO: Pod "pod-effa336d-7d84-4f6a-88a8-e97fe91545d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.340514198s
Oct  6 20:47:06.238: INFO: Pod "pod-effa336d-7d84-4f6a-88a8-e97fe91545d6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.465405604s
Oct  6 20:47:08.601: INFO: Pod "pod-effa336d-7d84-4f6a-88a8-e97fe91545d6": Phase="Running", Reason="", readiness=true. Elapsed: 11.828352087s
Oct  6 20:47:10.609: INFO: Pod "pod-effa336d-7d84-4f6a-88a8-e97fe91545d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.835851551s
STEP: Saw pod success
Oct  6 20:47:10.609: INFO: Pod "pod-effa336d-7d84-4f6a-88a8-e97fe91545d6" satisfied condition "success or failure"
Oct  6 20:47:10.619: INFO: Trying to get logs from node jerma-worker2 pod pod-effa336d-7d84-4f6a-88a8-e97fe91545d6 container test-container: 
STEP: delete the pod
Oct  6 20:47:10.657: INFO: Waiting for pod pod-effa336d-7d84-4f6a-88a8-e97fe91545d6 to disappear
Oct  6 20:47:10.661: INFO: Pod pod-effa336d-7d84-4f6a-88a8-e97fe91545d6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:47:10.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4267" for this suite.

• [SLOW TEST:14.035 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2144,"failed":0}
S
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:47:10.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Oct  6 20:47:10.815: INFO: Waiting up to 5m0s for pod "var-expansion-86a974c9-ae12-46f1-9757-c3474a79a531" in namespace "var-expansion-5821" to be "success or failure"
Oct  6 20:47:10.883: INFO: Pod "var-expansion-86a974c9-ae12-46f1-9757-c3474a79a531": Phase="Pending", Reason="", readiness=false. Elapsed: 67.714972ms
Oct  6 20:47:13.057: INFO: Pod "var-expansion-86a974c9-ae12-46f1-9757-c3474a79a531": Phase="Pending", Reason="", readiness=false. Elapsed: 2.241060843s
Oct  6 20:47:15.063: INFO: Pod "var-expansion-86a974c9-ae12-46f1-9757-c3474a79a531": Phase="Pending", Reason="", readiness=false. Elapsed: 4.247490776s
Oct  6 20:47:17.278: INFO: Pod "var-expansion-86a974c9-ae12-46f1-9757-c3474a79a531": Phase="Pending", Reason="", readiness=false. Elapsed: 6.462114894s
Oct  6 20:47:19.303: INFO: Pod "var-expansion-86a974c9-ae12-46f1-9757-c3474a79a531": Phase="Running", Reason="", readiness=true. Elapsed: 8.486979161s
Oct  6 20:47:21.310: INFO: Pod "var-expansion-86a974c9-ae12-46f1-9757-c3474a79a531": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.494233061s
STEP: Saw pod success
Oct  6 20:47:21.310: INFO: Pod "var-expansion-86a974c9-ae12-46f1-9757-c3474a79a531" satisfied condition "success or failure"
Oct  6 20:47:21.315: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-86a974c9-ae12-46f1-9757-c3474a79a531 container dapi-container: 
STEP: delete the pod
Oct  6 20:47:21.350: INFO: Waiting for pod var-expansion-86a974c9-ae12-46f1-9757-c3474a79a531 to disappear
Oct  6 20:47:21.362: INFO: Pod var-expansion-86a974c9-ae12-46f1-9757-c3474a79a531 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:47:21.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5821" for this suite.

• [SLOW TEST:10.702 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2145,"failed":0}
SSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:47:21.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-806
STEP: creating replication controller nodeport-test in namespace services-806
I1006 20:47:21.633723       7 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-806, replica count: 2
I1006 20:47:24.685282       7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1006 20:47:27.686091       7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1006 20:47:30.686879       7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct  6 20:47:30.687: INFO: Creating new exec pod
Oct  6 20:47:37.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-806 execpodcjrlp -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Oct  6 20:47:41.878: INFO: stderr: "I1006 20:47:41.704986    1995 log.go:172] (0x400053e6e0) (0x40007d1c20) Create stream\nI1006 20:47:41.710412    1995 log.go:172] (0x400053e6e0) (0x40007d1c20) Stream added, broadcasting: 1\nI1006 20:47:41.725693    1995 log.go:172] (0x400053e6e0) Reply frame received for 1\nI1006 20:47:41.726408    1995 log.go:172] (0x400053e6e0) (0x40008f00a0) Create stream\nI1006 20:47:41.726479    1995 log.go:172] (0x400053e6e0) (0x40008f00a0) Stream added, broadcasting: 3\nI1006 20:47:41.728107    1995 log.go:172] (0x400053e6e0) Reply frame received for 3\nI1006 20:47:41.728631    1995 log.go:172] (0x400053e6e0) (0x400041a000) Create stream\nI1006 20:47:41.728744    1995 log.go:172] (0x400053e6e0) (0x400041a000) Stream added, broadcasting: 5\nI1006 20:47:41.730576    1995 log.go:172] (0x400053e6e0) Reply frame received for 5\nI1006 20:47:41.842309    1995 log.go:172] (0x400053e6e0) Data frame received for 5\nI1006 20:47:41.842649    1995 log.go:172] (0x400041a000) (5) Data frame handling\nI1006 20:47:41.843436    1995 log.go:172] (0x400041a000) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI1006 20:47:41.862291    1995 log.go:172] (0x400053e6e0) Data frame received for 5\nI1006 20:47:41.862485    1995 log.go:172] (0x400041a000) (5) Data frame handling\nI1006 20:47:41.862582    1995 log.go:172] (0x400041a000) (5) Data frame sent\nI1006 20:47:41.862668    1995 log.go:172] (0x400053e6e0) Data frame received for 5\nI1006 20:47:41.862738    1995 log.go:172] (0x400041a000) (5) Data frame handling\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI1006 20:47:41.863310    1995 log.go:172] (0x400053e6e0) Data frame received for 3\nI1006 20:47:41.863539    1995 log.go:172] (0x40008f00a0) (3) Data frame handling\nI1006 20:47:41.864599    1995 log.go:172] (0x400053e6e0) Data frame received for 1\nI1006 20:47:41.864689    1995 log.go:172] (0x40007d1c20) (1) Data frame handling\nI1006 20:47:41.864821    1995 log.go:172] (0x40007d1c20) (1) Data frame sent\nI1006 20:47:41.865581    1995 log.go:172] (0x400053e6e0) (0x40007d1c20) Stream removed, broadcasting: 1\nI1006 20:47:41.868069    1995 log.go:172] (0x400053e6e0) Go away received\nI1006 20:47:41.871479    1995 log.go:172] (0x400053e6e0) (0x40007d1c20) Stream removed, broadcasting: 1\nI1006 20:47:41.871914    1995 log.go:172] (0x400053e6e0) (0x40008f00a0) Stream removed, broadcasting: 3\nI1006 20:47:41.872116    1995 log.go:172] (0x400053e6e0) (0x400041a000) Stream removed, broadcasting: 5\n"
Oct  6 20:47:41.879: INFO: stdout: ""
Oct  6 20:47:41.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-806 execpodcjrlp -- /bin/sh -x -c nc -zv -t -w 2 10.98.247.48 80'
Oct  6 20:47:43.321: INFO: stderr: "I1006 20:47:43.226430    2028 log.go:172] (0x4000538000) (0x4000798000) Create stream\nI1006 20:47:43.230192    2028 log.go:172] (0x4000538000) (0x4000798000) Stream added, broadcasting: 1\nI1006 20:47:43.241030    2028 log.go:172] (0x4000538000) Reply frame received for 1\nI1006 20:47:43.241724    2028 log.go:172] (0x4000538000) (0x40007980a0) Create stream\nI1006 20:47:43.241786    2028 log.go:172] (0x4000538000) (0x40007980a0) Stream added, broadcasting: 3\nI1006 20:47:43.243022    2028 log.go:172] (0x4000538000) Reply frame received for 3\nI1006 20:47:43.243369    2028 log.go:172] (0x4000538000) (0x40007e8000) Create stream\nI1006 20:47:43.243436    2028 log.go:172] (0x4000538000) (0x40007e8000) Stream added, broadcasting: 5\nI1006 20:47:43.244551    2028 log.go:172] (0x4000538000) Reply frame received for 5\nI1006 20:47:43.306748    2028 log.go:172] (0x4000538000) Data frame received for 3\nI1006 20:47:43.307085    2028 log.go:172] (0x4000538000) Data frame received for 5\nI1006 20:47:43.307182    2028 log.go:172] (0x40007e8000) (5) Data frame handling\nI1006 20:47:43.307330    2028 log.go:172] (0x40007980a0) (3) Data frame handling\nI1006 20:47:43.307443    2028 log.go:172] (0x4000538000) Data frame received for 1\nI1006 20:47:43.307516    2028 log.go:172] (0x4000798000) (1) Data frame handling\nI1006 20:47:43.308259    2028 log.go:172] (0x40007e8000) (5) Data frame sent\nI1006 20:47:43.308700    2028 log.go:172] (0x4000538000) Data frame received for 5\nI1006 20:47:43.308805    2028 log.go:172] (0x40007e8000) (5) Data frame handling\n+ nc -zv -t -w 2 10.98.247.48 80\nConnection to 10.98.247.48 80 port [tcp/http] succeeded!\nI1006 20:47:43.309486    2028 log.go:172] (0x4000798000) (1) Data frame sent\nI1006 20:47:43.310726    2028 log.go:172] (0x4000538000) (0x4000798000) Stream removed, broadcasting: 1\nI1006 20:47:43.313392    2028 log.go:172] (0x4000538000) Go away received\nI1006 20:47:43.316119    2028 log.go:172] (0x4000538000) (0x4000798000) Stream removed, broadcasting: 1\nI1006 20:47:43.316291    2028 log.go:172] (0x4000538000) (0x40007980a0) Stream removed, broadcasting: 3\nI1006 20:47:43.316415    2028 log.go:172] (0x4000538000) (0x40007e8000) Stream removed, broadcasting: 5\n"
Oct  6 20:47:43.322: INFO: stdout: ""
Oct  6 20:47:43.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-806 execpodcjrlp -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.9 32393'
Oct  6 20:47:44.740: INFO: stderr: "I1006 20:47:44.644657    2051 log.go:172] (0x4000a48d10) (0x400075c140) Create stream\nI1006 20:47:44.648468    2051 log.go:172] (0x4000a48d10) (0x400075c140) Stream added, broadcasting: 1\nI1006 20:47:44.662200    2051 log.go:172] (0x4000a48d10) Reply frame received for 1\nI1006 20:47:44.662941    2051 log.go:172] (0x4000a48d10) (0x40007bc000) Create stream\nI1006 20:47:44.663009    2051 log.go:172] (0x4000a48d10) (0x40007bc000) Stream added, broadcasting: 3\nI1006 20:47:44.664780    2051 log.go:172] (0x4000a48d10) Reply frame received for 3\nI1006 20:47:44.665376    2051 log.go:172] (0x4000a48d10) (0x400075c1e0) Create stream\nI1006 20:47:44.665500    2051 log.go:172] (0x4000a48d10) (0x400075c1e0) Stream added, broadcasting: 5\nI1006 20:47:44.667299    2051 log.go:172] (0x4000a48d10) Reply frame received for 5\nI1006 20:47:44.720751    2051 log.go:172] (0x4000a48d10) Data frame received for 3\nI1006 20:47:44.721154    2051 log.go:172] (0x4000a48d10) Data frame received for 5\nI1006 20:47:44.721493    2051 log.go:172] (0x4000a48d10) Data frame received for 1\nI1006 20:47:44.721651    2051 log.go:172] (0x400075c140) (1) Data frame handling\nI1006 20:47:44.721920    2051 log.go:172] (0x400075c1e0) (5) Data frame handling\nI1006 20:47:44.722219    2051 log.go:172] (0x40007bc000) (3) Data frame handling\nI1006 20:47:44.723254    2051 log.go:172] (0x400075c140) (1) Data frame sent\nI1006 20:47:44.723581    2051 log.go:172] (0x400075c1e0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.9 32393\nI1006 20:47:44.724021    2051 log.go:172] (0x4000a48d10) Data frame received for 5\nI1006 20:47:44.724111    2051 log.go:172] (0x400075c1e0) (5) Data frame handling\nI1006 20:47:44.725438    2051 log.go:172] (0x4000a48d10) (0x400075c140) Stream removed, broadcasting: 1\nConnection to 172.18.0.9 32393 port [tcp/32393] succeeded!\nI1006 20:47:44.727607    2051 log.go:172] (0x400075c1e0) (5) Data frame sent\nI1006 20:47:44.727793    2051 log.go:172] (0x4000a48d10) Data frame received for 5\nI1006 20:47:44.728807    2051 log.go:172] (0x400075c1e0) (5) Data frame handling\nI1006 20:47:44.729443    2051 log.go:172] (0x4000a48d10) Go away received\nI1006 20:47:44.733168    2051 log.go:172] (0x4000a48d10) (0x400075c140) Stream removed, broadcasting: 1\nI1006 20:47:44.733483    2051 log.go:172] (0x4000a48d10) (0x40007bc000) Stream removed, broadcasting: 3\nI1006 20:47:44.733727    2051 log.go:172] (0x4000a48d10) (0x400075c1e0) Stream removed, broadcasting: 5\n"
Oct  6 20:47:44.741: INFO: stdout: ""
Oct  6 20:47:44.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-806 execpodcjrlp -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.10 32393'
Oct  6 20:47:46.214: INFO: stderr: "I1006 20:47:46.101890    2074 log.go:172] (0x4000ada000) (0x4000aca000) Create stream\nI1006 20:47:46.105348    2074 log.go:172] (0x4000ada000) (0x4000aca000) Stream added, broadcasting: 1\nI1006 20:47:46.119237    2074 log.go:172] (0x4000ada000) Reply frame received for 1\nI1006 20:47:46.119790    2074 log.go:172] (0x4000ada000) (0x4000833ae0) Create stream\nI1006 20:47:46.119861    2074 log.go:172] (0x4000ada000) (0x4000833ae0) Stream added, broadcasting: 3\nI1006 20:47:46.121207    2074 log.go:172] (0x4000ada000) Reply frame received for 3\nI1006 20:47:46.121465    2074 log.go:172] (0x4000ada000) (0x4000a6c000) Create stream\nI1006 20:47:46.121527    2074 log.go:172] (0x4000ada000) (0x4000a6c000) Stream added, broadcasting: 5\nI1006 20:47:46.122714    2074 log.go:172] (0x4000ada000) Reply frame received for 5\nI1006 20:47:46.193920    2074 log.go:172] (0x4000ada000) Data frame received for 3\nI1006 20:47:46.194277    2074 log.go:172] (0x4000ada000) Data frame received for 5\nI1006 20:47:46.194449    2074 log.go:172] (0x4000833ae0) (3) Data frame handling\nI1006 20:47:46.194748    2074 log.go:172] (0x4000a6c000) (5) Data frame handling\nI1006 20:47:46.195151    2074 log.go:172] (0x4000ada000) Data frame received for 1\nI1006 20:47:46.195294    2074 log.go:172] (0x4000aca000) (1) Data frame handling\nI1006 20:47:46.197821    2074 log.go:172] (0x4000a6c000) (5) Data frame sent\nI1006 20:47:46.198040    2074 log.go:172] (0x4000aca000) (1) Data frame sent\n+ nc -zv -t -w 2 172.18.0.10 32393\nConnection to 172.18.0.10 32393 port [tcp/32393] succeeded!\nI1006 20:47:46.198279    2074 log.go:172] (0x4000ada000) Data frame received for 5\nI1006 20:47:46.198938    2074 log.go:172] (0x4000ada000) (0x4000aca000) Stream removed, broadcasting: 1\nI1006 20:47:46.200809    2074 log.go:172] (0x4000a6c000) (5) Data frame handling\nI1006 20:47:46.201873    2074 log.go:172] (0x4000ada000) Go away received\nI1006 20:47:46.205588    2074 log.go:172] (0x4000ada000) (0x4000aca000) Stream removed, broadcasting: 1\nI1006 20:47:46.205967    2074 log.go:172] (0x4000ada000) (0x4000833ae0) Stream removed, broadcasting: 3\nI1006 20:47:46.206217    2074 log.go:172] (0x4000ada000) (0x4000a6c000) Stream removed, broadcasting: 5\n"
Oct  6 20:47:46.215: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:47:46.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-806" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:24.856 seconds]
[sig-network] Services
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":132,"skipped":2151,"failed":0}
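Note on the NodePort test above: the four `nc -zv -t -w 2 <host> <port>` probes exercise the same service through its DNS name, its ClusterIP, and each node's IP plus the allocated NodePort. The underlying check is just a timed TCP connect; a minimal Python sketch of that probe, run against a local listener rather than a cluster (all hosts/ports here are illustrative, not taken from this run):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Rough equivalent of `nc -zv -t -w 2 host port`: succeed iff a
    TCP connection can be established within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Probe a listener we control, so the example is self-contained.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))       # OS picks a free port
    srv.listen(1)
    port = srv.getsockname()[1]
    print(tcp_reachable("127.0.0.1", port))  # listener is up -> True
    srv.close()
```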
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:47:46.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run default
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490
[It] should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Oct  6 20:47:46.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5331'
Oct  6 20:47:47.625: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Oct  6 20:47:47.625: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1496
Oct  6 20:47:47.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5331'
Oct  6 20:47:48.845: INFO: stderr: ""
Oct  6 20:47:48.845: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:47:48.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5331" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":133,"skipped":2157,"failed":0}
S
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:47:48.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-8885
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8885 to expose endpoints map[]
Oct  6 20:47:49.070: INFO: successfully validated that service multi-endpoint-test in namespace services-8885 exposes endpoints map[] (67.309567ms elapsed)
STEP: Creating pod pod1 in namespace services-8885
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8885 to expose endpoints map[pod1:[100]]
Oct  6 20:47:56.067: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (6.984064115s elapsed, will retry)
Oct  6 20:48:05.236: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (16.152771804s elapsed, will retry)
Oct  6 20:48:06.246: INFO: successfully validated that service multi-endpoint-test in namespace services-8885 exposes endpoints map[pod1:[100]] (17.16346025s elapsed)
STEP: Creating pod pod2 in namespace services-8885
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8885 to expose endpoints map[pod1:[100] pod2:[101]]
Oct  6 20:48:10.556: INFO: successfully validated that service multi-endpoint-test in namespace services-8885 exposes endpoints map[pod1:[100] pod2:[101]] (4.303549772s elapsed)
STEP: Deleting pod pod1 in namespace services-8885
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8885 to expose endpoints map[pod2:[101]]
Oct  6 20:48:10.578: INFO: successfully validated that service multi-endpoint-test in namespace services-8885 exposes endpoints map[pod2:[101]] (14.920716ms elapsed)
STEP: Deleting pod pod2 in namespace services-8885
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8885 to expose endpoints map[]
Oct  6 20:48:10.605: INFO: successfully validated that service multi-endpoint-test in namespace services-8885 exposes endpoints map[] (21.557036ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:48:10.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8885" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:21.781 seconds]
[sig-network] Services
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":134,"skipped":2158,"failed":0}
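Note on the multiport-endpoints test above: the `will retry` lines show how endpoint exposure is validated — the framework re-reads the service's endpoints and compares them against the expected `map[pod:ports]` until they match or the 3m0s budget elapses. A minimal sketch of that retry pattern, with a stand-in fetch function (not the real e2e framework API):

```python
import time

def wait_for_endpoints(fetch, expected, timeout=180.0, interval=1.0):
    """Poll fetch() until it returns `expected` or `timeout` elapses.
    Returns the elapsed seconds on success; raises TimeoutError otherwise."""
    start = time.monotonic()
    while True:
        found = fetch()
        elapsed = time.monotonic() - start
        if found == expected:
            return elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"found {found!r}, expected {expected!r}")
        time.sleep(interval)

# Simulated fetcher: the endpoint only shows up on the third poll,
# mirroring the "Unexpected endpoints ... will retry" lines above.
polls = iter([{}, {}, {"pod1": [100]}])
elapsed = wait_for_endpoints(lambda: next(polls), {"pod1": [100]}, interval=0.01)
print(f"validated after {elapsed:.2f}s")
```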
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:48:10.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Oct  6 20:48:11.358: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0b626640-fe88-443b-9561-b91b2ee388a4" in namespace "downward-api-8419" to be "success or failure"
Oct  6 20:48:11.518: INFO: Pod "downwardapi-volume-0b626640-fe88-443b-9561-b91b2ee388a4": Phase="Pending", Reason="", readiness=false. Elapsed: 159.838206ms
Oct  6 20:48:13.523: INFO: Pod "downwardapi-volume-0b626640-fe88-443b-9561-b91b2ee388a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165221354s
Oct  6 20:48:15.541: INFO: Pod "downwardapi-volume-0b626640-fe88-443b-9561-b91b2ee388a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182884101s
Oct  6 20:48:17.546: INFO: Pod "downwardapi-volume-0b626640-fe88-443b-9561-b91b2ee388a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.187736797s
STEP: Saw pod success
Oct  6 20:48:17.546: INFO: Pod "downwardapi-volume-0b626640-fe88-443b-9561-b91b2ee388a4" satisfied condition "success or failure"
Oct  6 20:48:17.550: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-0b626640-fe88-443b-9561-b91b2ee388a4 container client-container: 
STEP: delete the pod
Oct  6 20:48:17.583: INFO: Waiting for pod downwardapi-volume-0b626640-fe88-443b-9561-b91b2ee388a4 to disappear
Oct  6 20:48:17.606: INFO: Pod downwardapi-volume-0b626640-fe88-443b-9561-b91b2ee388a4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:48:17.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8419" for this suite.

• [SLOW TEST:6.956 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2191,"failed":0}
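Note on the `Phase="Pending"` → `Phase="Succeeded"` sequence above: this is the framework's standard completion wait — poll the pod's phase until it reaches a terminal state or the 5m0s budget runs out ("success or failure"). A minimal sketch of that loop, with a stand-in phase getter (names are illustrative, not the e2e framework API):

```python
import time

def wait_for_pod_completion(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until it returns 'Succeeded' or 'Failed'
    (the log's "success or failure" condition) or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")

# Simulated pod that stays Pending for two polls, then succeeds,
# like the Elapsed: 159ms / 2.1s / 4.1s / 6.1s lines above.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_pod_completion(lambda: next(phases), interval=0.01))  # Succeeded
```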
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:48:17.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-7qq9
STEP: Creating a pod to test atomic-volume-subpath
Oct  6 20:48:17.732: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-7qq9" in namespace "subpath-7529" to be "success or failure"
Oct  6 20:48:17.786: INFO: Pod "pod-subpath-test-projected-7qq9": Phase="Pending", Reason="", readiness=false. Elapsed: 54.139138ms
Oct  6 20:48:19.795: INFO: Pod "pod-subpath-test-projected-7qq9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062576682s
Oct  6 20:48:21.852: INFO: Pod "pod-subpath-test-projected-7qq9": Phase="Running", Reason="", readiness=true. Elapsed: 4.120309401s
Oct  6 20:48:23.857: INFO: Pod "pod-subpath-test-projected-7qq9": Phase="Running", Reason="", readiness=true. Elapsed: 6.124973551s
Oct  6 20:48:25.863: INFO: Pod "pod-subpath-test-projected-7qq9": Phase="Running", Reason="", readiness=true. Elapsed: 8.131272565s
Oct  6 20:48:27.869: INFO: Pod "pod-subpath-test-projected-7qq9": Phase="Running", Reason="", readiness=true. Elapsed: 10.137054092s
Oct  6 20:48:29.874: INFO: Pod "pod-subpath-test-projected-7qq9": Phase="Running", Reason="", readiness=true. Elapsed: 12.142323135s
Oct  6 20:48:31.881: INFO: Pod "pod-subpath-test-projected-7qq9": Phase="Running", Reason="", readiness=true. Elapsed: 14.148474118s
Oct  6 20:48:33.894: INFO: Pod "pod-subpath-test-projected-7qq9": Phase="Running", Reason="", readiness=true. Elapsed: 16.161932759s
Oct  6 20:48:35.900: INFO: Pod "pod-subpath-test-projected-7qq9": Phase="Running", Reason="", readiness=true. Elapsed: 18.167880541s
Oct  6 20:48:37.906: INFO: Pod "pod-subpath-test-projected-7qq9": Phase="Running", Reason="", readiness=true. Elapsed: 20.17353726s
Oct  6 20:48:39.911: INFO: Pod "pod-subpath-test-projected-7qq9": Phase="Running", Reason="", readiness=true. Elapsed: 22.178971181s
Oct  6 20:48:41.916: INFO: Pod "pod-subpath-test-projected-7qq9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.183961517s
STEP: Saw pod success
Oct  6 20:48:41.916: INFO: Pod "pod-subpath-test-projected-7qq9" satisfied condition "success or failure"
Oct  6 20:48:41.921: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-7qq9 container test-container-subpath-projected-7qq9: 
STEP: delete the pod
Oct  6 20:48:41.967: INFO: Waiting for pod pod-subpath-test-projected-7qq9 to disappear
Oct  6 20:48:41.970: INFO: Pod pod-subpath-test-projected-7qq9 no longer exists
STEP: Deleting pod pod-subpath-test-projected-7qq9
Oct  6 20:48:41.970: INFO: Deleting pod "pod-subpath-test-projected-7qq9" in namespace "subpath-7529"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:48:41.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7529" for this suite.

• [SLOW TEST:24.363 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":136,"skipped":2203,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:48:41.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Oct  6 20:48:42.055: INFO: Waiting up to 5m0s for pod "var-expansion-2ad40154-6c9e-498e-b6d8-16e5eb70372e" in namespace "var-expansion-9780" to be "success or failure"
Oct  6 20:48:42.060: INFO: Pod "var-expansion-2ad40154-6c9e-498e-b6d8-16e5eb70372e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.51512ms
Oct  6 20:48:44.065: INFO: Pod "var-expansion-2ad40154-6c9e-498e-b6d8-16e5eb70372e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009289926s
Oct  6 20:48:46.070: INFO: Pod "var-expansion-2ad40154-6c9e-498e-b6d8-16e5eb70372e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014701631s
STEP: Saw pod success
Oct  6 20:48:46.071: INFO: Pod "var-expansion-2ad40154-6c9e-498e-b6d8-16e5eb70372e" satisfied condition "success or failure"
Oct  6 20:48:46.075: INFO: Trying to get logs from node jerma-worker pod var-expansion-2ad40154-6c9e-498e-b6d8-16e5eb70372e container dapi-container: 
STEP: delete the pod
Oct  6 20:48:46.110: INFO: Waiting for pod var-expansion-2ad40154-6c9e-498e-b6d8-16e5eb70372e to disappear
Oct  6 20:48:46.114: INFO: Pod var-expansion-2ad40154-6c9e-498e-b6d8-16e5eb70372e no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:48:46.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9780" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2222,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:48:46.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 20:49:08.261: INFO: Container started at 2020-10-06 20:48:48 +0000 UTC, pod became ready at 2020-10-06 20:49:06 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:49:08.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7647" for this suite.

• [SLOW TEST:22.152 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2233,"failed":0}
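Note on the readiness-probe test above: the log records that the container started at 20:48:48 UTC but the pod only became ready at 20:49:06 UTC, i.e. the readiness probe gated readiness for 18 seconds. The timing check the test performs can be sketched as follows; the `initial_delay` value is a hypothetical `initialDelaySeconds` setting, since the probe spec is not shown in this log:

```python
from datetime import datetime, timedelta

# Timestamps taken from the log line above.
started = datetime(2020, 10, 6, 20, 48, 48)
ready   = datetime(2020, 10, 6, 20, 49, 6)

# Hypothetical initialDelaySeconds; the actual probe spec is not in the log.
initial_delay = timedelta(seconds=10)

# The pod must not have been marked ready before the initial delay elapsed.
assert ready - started >= initial_delay
print(ready - started)  # 0:00:18
```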
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:49:08.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 20:49:08.358: INFO: Pod name rollover-pod: Found 0 pods out of 1
Oct  6 20:49:13.420: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Oct  6 20:49:15.432: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Oct  6 20:49:17.439: INFO: Creating deployment "test-rollover-deployment"
Oct  6 20:49:17.470: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Oct  6 20:49:19.481: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Oct  6 20:49:19.493: INFO: Ensure that both replica sets have 1 created replica
Oct  6 20:49:19.504: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Oct  6 20:49:19.513: INFO: Updating deployment test-rollover-deployment
Oct  6 20:49:19.513: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Oct  6 20:49:21.572: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Oct  6 20:49:21.585: INFO: Make sure deployment "test-rollover-deployment" is complete
Oct  6 20:49:21.595: INFO: all replica sets need to contain the pod-template-hash label
Oct  6 20:49:21.596: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614157, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614157, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614159, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614157, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 20:49:23.610: INFO: all replica sets need to contain the pod-template-hash label
Oct  6 20:49:23.610: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614157, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614157, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614163, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614157, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 20:49:25.611: INFO: all replica sets need to contain the pod-template-hash label
Oct  6 20:49:25.611: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614157, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614157, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614163, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614157, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 20:49:27.611: INFO: all replica sets need to contain the pod-template-hash label
Oct  6 20:49:27.612: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614157, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614157, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614163, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614157, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 20:49:29.611: INFO: all replica sets need to contain the pod-template-hash label
Oct  6 20:49:29.611: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614157, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614157, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614163, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614157, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 20:49:31.608: INFO: all replica sets need to contain the pod-template-hash label
Oct  6 20:49:31.609: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614157, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614157, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614163, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614157, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 20:49:33.621: INFO: 
Oct  6 20:49:33.621: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Oct  6 20:49:33.630: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-2127 /apis/apps/v1/namespaces/deployment-2127/deployments/test-rollover-deployment 546e0499-0309-47ee-a789-66b5eeea6d59 3609153 2 2020-10-06 20:49:17 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40056f4578  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-10-06 20:49:17 +0000 UTC,LastTransitionTime:2020-10-06 20:49:17 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-10-06 20:49:33 +0000 UTC,LastTransitionTime:2020-10-06 20:49:17 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Oct  6 20:49:33.635: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-2127 /apis/apps/v1/namespaces/deployment-2127/replicasets/test-rollover-deployment-574d6dfbff 5beb4622-aeda-4749-9cf1-788af51eae09 3609142 2 2020-10-06 20:49:19 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 546e0499-0309-47ee-a789-66b5eeea6d59 0x40056f4cd7 0x40056f4cd8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40056f4d48  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Oct  6 20:49:33.635: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Oct  6 20:49:33.636: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-2127 /apis/apps/v1/namespaces/deployment-2127/replicasets/test-rollover-controller 3be8c727-711f-4fd0-a76d-97ca1d3914e1 3609152 2 2020-10-06 20:49:08 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 546e0499-0309-47ee-a789-66b5eeea6d59 0x40056f4bd7 0x40056f4bd8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x40056f4c68  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Oct  6 20:49:33.636: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-2127 /apis/apps/v1/namespaces/deployment-2127/replicasets/test-rollover-deployment-f6c94f66c 1d2b77dc-5aa1-4027-9c36-6e0e09504ed4 3609092 2 2020-10-06 20:49:17 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 546e0499-0309-47ee-a789-66b5eeea6d59 0x40056f4db0 0x40056f4db1}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x40056f4fb8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Oct  6 20:49:33.641: INFO: Pod "test-rollover-deployment-574d6dfbff-ltfqs" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-ltfqs test-rollover-deployment-574d6dfbff- deployment-2127 /api/v1/namespaces/deployment-2127/pods/test-rollover-deployment-574d6dfbff-ltfqs 508ff9d0-cb9f-460d-97b7-a5d7971b3bec 3609111 0 2020-10-06 20:49:19 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 5beb4622-aeda-4749-9cf1-788af51eae09 0x40056f5507 0x40056f5508}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fvrt7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fvrt7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fvrt7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:49:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:49:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:49:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 20:49:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.85,StartTime:2020-10-06 20:49:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-06 20:49:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://7bd4b6829ca4571942e3c4c070782dce2f0c9d9148259dcd79956197601934d4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.85,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:49:33.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2127" for this suite.

• [SLOW TEST:25.373 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":139,"skipped":2270,"failed":0}
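The rollover verified above is driven by the deployment's update strategy, which the object dump at the end of the test shows as RollingUpdate with MaxUnavailable:0, MaxSurge:1 and MinReadySeconds:10. A minimal manifest sketch of that deployment (strategy fields, labels, name, and image are taken from the dumped DeploymentSpec; the rest is standard boilerplate):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10        # new pods must stay ready 10s before counting as available
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # never drop below the desired replica count during rollover
      maxSurge: 1            # allow one extra pod while the new ReplicaSet comes up
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
```

With maxUnavailable 0 and maxSurge 1, the controller briefly runs two pods (Replicas:2, UpdatedReplicas:1 in the polled status above) until the new pod clears minReadySeconds, then scales the old ReplicaSets to zero.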
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:49:33.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-fa5da7ce-1b75-40f7-8934-a12cfa3807d5
STEP: Creating a pod to test consume secrets
Oct  6 20:49:34.566: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-acff5b83-5283-4ace-ab0f-c5b5d121b15f" in namespace "projected-2980" to be "success or failure"
Oct  6 20:49:34.638: INFO: Pod "pod-projected-secrets-acff5b83-5283-4ace-ab0f-c5b5d121b15f": Phase="Pending", Reason="", readiness=false. Elapsed: 72.100606ms
Oct  6 20:49:36.643: INFO: Pod "pod-projected-secrets-acff5b83-5283-4ace-ab0f-c5b5d121b15f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0773352s
Oct  6 20:49:38.649: INFO: Pod "pod-projected-secrets-acff5b83-5283-4ace-ab0f-c5b5d121b15f": Phase="Running", Reason="", readiness=true. Elapsed: 4.083137615s
Oct  6 20:49:40.674: INFO: Pod "pod-projected-secrets-acff5b83-5283-4ace-ab0f-c5b5d121b15f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.107905519s
STEP: Saw pod success
Oct  6 20:49:40.674: INFO: Pod "pod-projected-secrets-acff5b83-5283-4ace-ab0f-c5b5d121b15f" satisfied condition "success or failure"
Oct  6 20:49:41.105: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-acff5b83-5283-4ace-ab0f-c5b5d121b15f container projected-secret-volume-test: 
STEP: delete the pod
Oct  6 20:49:41.209: INFO: Waiting for pod pod-projected-secrets-acff5b83-5283-4ace-ab0f-c5b5d121b15f to disappear
Oct  6 20:49:41.217: INFO: Pod pod-projected-secrets-acff5b83-5283-4ace-ab0f-c5b5d121b15f no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:49:41.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2980" for this suite.

• [SLOW TEST:7.573 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2290,"failed":0}
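The "mappings and Item Mode set" case consumes a secret through a projected volume whose `items` remap keys to paths and set per-file permission bits. A sketch of such a pod, assuming hypothetical key, path, mode, and image values (the log only shows the secret name and the container name):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # hypothetical; the test uses a UID-suffixed name
spec:
  containers:
  - name: projected-secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-fa5da7ce-1b75-40f7-8934-a12cfa3807d5
          items:
          - key: data-1                 # assumed key; "mappings" = key-to-path items
            path: new-path-data-1       # file appears under this path, not the key name
            mode: 0400                  # "Item Mode set": per-item permission bits
```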
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:49:41.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct  6 20:49:41.616: INFO: Waiting up to 5m0s for pod "pod-956dba4e-aba2-4ff9-9705-2b74942cb52e" in namespace "emptydir-4667" to be "success or failure"
Oct  6 20:49:41.746: INFO: Pod "pod-956dba4e-aba2-4ff9-9705-2b74942cb52e": Phase="Pending", Reason="", readiness=false. Elapsed: 130.191519ms
Oct  6 20:49:43.751: INFO: Pod "pod-956dba4e-aba2-4ff9-9705-2b74942cb52e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135644172s
Oct  6 20:49:45.756: INFO: Pod "pod-956dba4e-aba2-4ff9-9705-2b74942cb52e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140075102s
Oct  6 20:49:48.373: INFO: Pod "pod-956dba4e-aba2-4ff9-9705-2b74942cb52e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.757567105s
Oct  6 20:49:50.381: INFO: Pod "pod-956dba4e-aba2-4ff9-9705-2b74942cb52e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.76526764s
Oct  6 20:49:52.387: INFO: Pod "pod-956dba4e-aba2-4ff9-9705-2b74942cb52e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.771111241s
STEP: Saw pod success
Oct  6 20:49:52.387: INFO: Pod "pod-956dba4e-aba2-4ff9-9705-2b74942cb52e" satisfied condition "success or failure"
Oct  6 20:49:52.391: INFO: Trying to get logs from node jerma-worker pod pod-956dba4e-aba2-4ff9-9705-2b74942cb52e container test-container: 
STEP: delete the pod
Oct  6 20:49:52.430: INFO: Waiting for pod pod-956dba4e-aba2-4ff9-9705-2b74942cb52e to disappear
Oct  6 20:49:52.445: INFO: Pod pod-956dba4e-aba2-4ff9-9705-2b74942cb52e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:49:52.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4667" for this suite.

• [SLOW TEST:11.648 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2304,"failed":0}
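This case writes a file into an emptyDir volume on the default medium (node disk, as opposed to `medium: Memory`) as a non-root user and verifies the 0644 file mode. A sketch under those assumptions (pod name, UID, image, and command are illustrative; only the container name and the volume semantics come from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example     # hypothetical; the test uses a UID-suffixed name
spec:
  securityContext:
    runAsUser: 1001              # "non-root": file is created by a non-root UID (value assumed)
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # "default medium": omitting `medium` uses node storage
```

The pod runs to completion and the framework treats the Succeeded phase (seen in the polling lines above) as the "success or failure" condition before collecting the container's log output.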
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:49:52.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should scale a replication controller  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Oct  6 20:49:53.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6189'
Oct  6 20:49:54.811: INFO: stderr: ""
Oct  6 20:49:54.812: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Oct  6 20:49:54.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6189'
Oct  6 20:49:56.103: INFO: stderr: ""
Oct  6 20:49:56.103: INFO: stdout: "update-demo-nautilus-f88fr update-demo-nautilus-knrrx "
Oct  6 20:49:56.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f88fr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6189'
Oct  6 20:49:57.367: INFO: stderr: ""
Oct  6 20:49:57.367: INFO: stdout: ""
Oct  6 20:49:57.367: INFO: update-demo-nautilus-f88fr is created but not running
Oct  6 20:50:02.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6189'
Oct  6 20:50:03.708: INFO: stderr: ""
Oct  6 20:50:03.708: INFO: stdout: "update-demo-nautilus-f88fr update-demo-nautilus-knrrx "
Oct  6 20:50:03.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f88fr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6189'
Oct  6 20:50:04.951: INFO: stderr: ""
Oct  6 20:50:04.951: INFO: stdout: "true"
Oct  6 20:50:04.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f88fr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6189'
Oct  6 20:50:06.173: INFO: stderr: ""
Oct  6 20:50:06.173: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Oct  6 20:50:06.173: INFO: validating pod update-demo-nautilus-f88fr
Oct  6 20:50:06.179: INFO: got data: {
  "image": "nautilus.jpg"
}

Oct  6 20:50:06.179: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct  6 20:50:06.179: INFO: update-demo-nautilus-f88fr is verified up and running
Oct  6 20:50:06.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-knrrx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6189'
Oct  6 20:50:07.463: INFO: stderr: ""
Oct  6 20:50:07.464: INFO: stdout: "true"
Oct  6 20:50:07.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-knrrx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6189'
Oct  6 20:50:08.746: INFO: stderr: ""
Oct  6 20:50:08.746: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Oct  6 20:50:08.746: INFO: validating pod update-demo-nautilus-knrrx
Oct  6 20:50:08.751: INFO: got data: {
  "image": "nautilus.jpg"
}

Oct  6 20:50:08.751: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct  6 20:50:08.751: INFO: update-demo-nautilus-knrrx is verified up and running
STEP: scaling down the replication controller
Oct  6 20:50:08.758: INFO: scanned /root for discovery docs: 
Oct  6 20:50:08.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6189'
Oct  6 20:50:11.107: INFO: stderr: ""
Oct  6 20:50:11.107: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Oct  6 20:50:11.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6189'
Oct  6 20:50:12.482: INFO: stderr: ""
Oct  6 20:50:12.482: INFO: stdout: "update-demo-nautilus-f88fr update-demo-nautilus-knrrx "
STEP: Replicas for name=update-demo: expected=1 actual=2
Oct  6 20:50:17.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6189'
Oct  6 20:50:18.749: INFO: stderr: ""
Oct  6 20:50:18.749: INFO: stdout: "update-demo-nautilus-f88fr update-demo-nautilus-knrrx "
STEP: Replicas for name=update-demo: expected=1 actual=2
Oct  6 20:50:23.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6189'
Oct  6 20:50:25.020: INFO: stderr: ""
Oct  6 20:50:25.021: INFO: stdout: "update-demo-nautilus-knrrx "
Oct  6 20:50:25.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-knrrx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6189'
Oct  6 20:50:26.240: INFO: stderr: ""
Oct  6 20:50:26.241: INFO: stdout: "true"
Oct  6 20:50:26.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-knrrx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6189'
Oct  6 20:50:27.465: INFO: stderr: ""
Oct  6 20:50:27.465: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Oct  6 20:50:27.465: INFO: validating pod update-demo-nautilus-knrrx
Oct  6 20:50:27.471: INFO: got data: {
  "image": "nautilus.jpg"
}

Oct  6 20:50:27.472: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Oct  6 20:50:27.472: INFO: update-demo-nautilus-knrrx is verified up and running
STEP: scaling up the replication controller
Oct  6 20:50:27.480: INFO: scanned /root for discovery docs: 
Oct  6 20:50:27.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6189'
Oct  6 20:50:29.835: INFO: stderr: ""
Oct  6 20:50:29.835: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Oct  6 20:50:29.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6189'
Oct  6 20:50:31.219: INFO: stderr: ""
Oct  6 20:50:31.219: INFO: stdout: "update-demo-nautilus-4vl8r update-demo-nautilus-knrrx "
Oct  6 20:50:31.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vl8r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6189'
Oct  6 20:50:32.415: INFO: stderr: ""
Oct  6 20:50:32.415: INFO: stdout: ""
Oct  6 20:50:32.415: INFO: update-demo-nautilus-4vl8r is created but not running
Oct  6 20:50:37.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6189'
Oct  6 20:50:38.697: INFO: stderr: ""
Oct  6 20:50:38.697: INFO: stdout: "update-demo-nautilus-4vl8r update-demo-nautilus-knrrx "
Oct  6 20:50:38.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vl8r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6189'
Oct  6 20:50:39.957: INFO: stderr: ""
Oct  6 20:50:39.957: INFO: stdout: ""
Oct  6 20:50:39.957: INFO: update-demo-nautilus-4vl8r is created but not running
Oct  6 20:50:44.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6189'
Oct  6 20:50:46.243: INFO: stderr: ""
Oct  6 20:50:46.243: INFO: stdout: "update-demo-nautilus-4vl8r update-demo-nautilus-knrrx "
Oct  6 20:50:46.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vl8r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6189'
Oct  6 20:50:47.476: INFO: stderr: ""
Oct  6 20:50:47.476: INFO: stdout: "true"
Oct  6 20:50:47.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vl8r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6189'
Oct  6 20:50:48.709: INFO: stderr: ""
Oct  6 20:50:48.709: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Oct  6 20:50:48.709: INFO: validating pod update-demo-nautilus-4vl8r
Oct  6 20:50:48.715: INFO: got data: {
  "image": "nautilus.jpg"
}

Oct  6 20:50:48.715: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Oct  6 20:50:48.716: INFO: update-demo-nautilus-4vl8r is verified up and running
Oct  6 20:50:48.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-knrrx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6189'
Oct  6 20:50:49.981: INFO: stderr: ""
Oct  6 20:50:49.981: INFO: stdout: "true"
Oct  6 20:50:49.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-knrrx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6189'
Oct  6 20:50:51.220: INFO: stderr: ""
Oct  6 20:50:51.220: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Oct  6 20:50:51.220: INFO: validating pod update-demo-nautilus-knrrx
Oct  6 20:50:51.231: INFO: got data: {
  "image": "nautilus.jpg"
}

Oct  6 20:50:51.231: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Oct  6 20:50:51.231: INFO: update-demo-nautilus-knrrx is verified up and running
STEP: using delete to clean up resources
Oct  6 20:50:51.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6189'
Oct  6 20:50:52.454: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct  6 20:50:52.454: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Oct  6 20:50:52.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6189'
Oct  6 20:50:55.057: INFO: stderr: "No resources found in kubectl-6189 namespace.\n"
Oct  6 20:50:55.057: INFO: stdout: ""
Oct  6 20:50:55.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6189 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Oct  6 20:50:56.344: INFO: stderr: ""
Oct  6 20:50:56.345: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:50:56.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6189" for this suite.

• [SLOW TEST:63.481 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should scale a replication controller  [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":142,"skipped":2340,"failed":0}
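The scale-down/scale-up sequence above works by polling `kubectl get pods -l name=update-demo` roughly every five seconds until the pod count matches the requested replicas (note the repeated "Replicas for name=update-demo: expected=1 actual=2" lines). A minimal Python sketch of that polling loop, with a pluggable `list_pods` callable standing in for the kubectl call — the function name and timings here are illustrative, not the e2e framework's actual implementation:

```python
import time

def wait_for_replicas(list_pods, expected, timeout=300, interval=5, sleep=time.sleep):
    """Poll list_pods() until it returns exactly `expected` pod names.

    list_pods: callable returning a list of pod names, a stand-in for
    `kubectl get pods -l name=update-demo`.
    Returns the final pod list, or raises TimeoutError on timeout.
    """
    deadline = time.monotonic() + timeout
    while True:
        pods = list_pods()
        if len(pods) == expected:
            return pods
        if time.monotonic() >= deadline:
            raise TimeoutError(f"expected={expected} actual={len(pods)}")
        sleep(interval)

# Simulate the scale-down in the log: two pods observed twice, then one survives.
snapshots = iter([
    ["update-demo-nautilus-f88fr", "update-demo-nautilus-knrrx"],
    ["update-demo-nautilus-f88fr", "update-demo-nautilus-knrrx"],
    ["update-demo-nautilus-knrrx"],
])
survivors = wait_for_replicas(lambda: next(snapshots), expected=1, sleep=lambda _: None)
print(survivors)  # ['update-demo-nautilus-knrrx']
```

Injecting `sleep` keeps the sketch testable without waiting out the real five-second intervals.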
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:50:56.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-3eb32322-1d7e-44a9-ad19-2459ef7e6741 in namespace container-probe-9974
Oct  6 20:51:01.396: INFO: Started pod test-webserver-3eb32322-1d7e-44a9-ad19-2459ef7e6741 in namespace container-probe-9974
STEP: checking the pod's current state and verifying that restartCount is present
Oct  6 20:51:01.399: INFO: Initial restart count of pod test-webserver-3eb32322-1d7e-44a9-ad19-2459ef7e6741 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:55:02.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9974" for this suite.

• [SLOW TEST:246.023 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2367,"failed":0}
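The "should *not* be restarted" probe test above records the pod's initial restartCount, then watches it for roughly four minutes (20:51:01 to 20:55:02) and fails if it ever rises. A hedged Python sketch of that observation loop; `get_restart_count`, the fake clock, and the durations are illustrative assumptions, not the framework's code:

```python
import time

def assert_no_restarts(get_restart_count, observe_seconds=240, interval=10,
                       clock=time.monotonic, sleep=time.sleep):
    """Watch a container's restartCount for observe_seconds.

    get_restart_count: callable returning the current restartCount, a
    stand-in for reading pod.status.containerStatuses.
    Raises AssertionError if the count ever exceeds the initial value.
    """
    initial = get_restart_count()
    deadline = clock() + observe_seconds
    while clock() < deadline:
        current = get_restart_count()
        if current > initial:
            raise AssertionError(f"restarted: {initial} -> {current}")
        sleep(interval)
    return initial

# Fake clock so the four-minute watch runs instantly in this sketch.
now = [0.0]
healthy = assert_no_restarts(lambda: 0, clock=lambda: now[0],
                             sleep=lambda s: now.__setitem__(0, now[0] + s))
print("restart count stayed at", healthy)  # restart count stayed at 0
```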
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:55:02.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-e5386dbc-5aaa-4769-980e-10746b67d4d9
STEP: Creating a pod to test consume secrets
Oct  6 20:55:02.698: INFO: Waiting up to 5m0s for pod "pod-secrets-d8ff00e5-48ce-4d2a-b3a5-3505e5bfe17e" in namespace "secrets-6365" to be "success or failure"
Oct  6 20:55:02.719: INFO: Pod "pod-secrets-d8ff00e5-48ce-4d2a-b3a5-3505e5bfe17e": Phase="Pending", Reason="", readiness=false. Elapsed: 20.839431ms
Oct  6 20:55:04.731: INFO: Pod "pod-secrets-d8ff00e5-48ce-4d2a-b3a5-3505e5bfe17e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032470463s
Oct  6 20:55:06.739: INFO: Pod "pod-secrets-d8ff00e5-48ce-4d2a-b3a5-3505e5bfe17e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041004687s
Oct  6 20:55:09.044: INFO: Pod "pod-secrets-d8ff00e5-48ce-4d2a-b3a5-3505e5bfe17e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.345217144s
Oct  6 20:55:11.051: INFO: Pod "pod-secrets-d8ff00e5-48ce-4d2a-b3a5-3505e5bfe17e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.352159256s
Oct  6 20:55:13.466: INFO: Pod "pod-secrets-d8ff00e5-48ce-4d2a-b3a5-3505e5bfe17e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.768011913s
Oct  6 20:55:15.474: INFO: Pod "pod-secrets-d8ff00e5-48ce-4d2a-b3a5-3505e5bfe17e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.775280057s
Oct  6 20:55:17.481: INFO: Pod "pod-secrets-d8ff00e5-48ce-4d2a-b3a5-3505e5bfe17e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.783013077s
Oct  6 20:55:19.797: INFO: Pod "pod-secrets-d8ff00e5-48ce-4d2a-b3a5-3505e5bfe17e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.09860646s
Oct  6 20:55:21.965: INFO: Pod "pod-secrets-d8ff00e5-48ce-4d2a-b3a5-3505e5bfe17e": Phase="Pending", Reason="", readiness=false. Elapsed: 19.266704264s
Oct  6 20:55:23.972: INFO: Pod "pod-secrets-d8ff00e5-48ce-4d2a-b3a5-3505e5bfe17e": Phase="Pending", Reason="", readiness=false. Elapsed: 21.273638446s
Oct  6 20:55:25.979: INFO: Pod "pod-secrets-d8ff00e5-48ce-4d2a-b3a5-3505e5bfe17e": Phase="Running", Reason="", readiness=true. Elapsed: 23.280211688s
Oct  6 20:55:27.985: INFO: Pod "pod-secrets-d8ff00e5-48ce-4d2a-b3a5-3505e5bfe17e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.286967672s
STEP: Saw pod success
Oct  6 20:55:27.986: INFO: Pod "pod-secrets-d8ff00e5-48ce-4d2a-b3a5-3505e5bfe17e" satisfied condition "success or failure"
Oct  6 20:55:27.990: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-d8ff00e5-48ce-4d2a-b3a5-3505e5bfe17e container secret-volume-test: 
STEP: delete the pod
Oct  6 20:55:28.025: INFO: Waiting for pod pod-secrets-d8ff00e5-48ce-4d2a-b3a5-3505e5bfe17e to disappear
Oct  6 20:55:28.059: INFO: Pod pod-secrets-d8ff00e5-48ce-4d2a-b3a5-3505e5bfe17e no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:55:28.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6365" for this suite.

• [SLOW TEST:25.680 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2418,"failed":0}
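The long run of "Phase=Pending ... Elapsed" lines above is the framework polling the pod's phase every couple of seconds until it reaches a terminal state, under a 5m0s budget ("to be 'success or failure'"). A minimal sketch of that wait in Python, with `get_phase` standing in for a pod GET (names and intervals are illustrative):

```python
import time

def wait_for_success_or_failure(get_phase, timeout=300, interval=2, sleep=time.sleep):
    """Poll a pod's phase until it is Succeeded or Failed.

    get_phase: callable returning the pod's phase string, a stand-in
    for fetching pod.status.phase from the API server.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")

# Mirror the log: Pending, Pending, ..., Running, then Succeeded.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
result = wait_for_success_or_failure(lambda: next(phases), sleep=lambda _: None)
print(result)  # Succeeded
```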
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:55:28.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-9985
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9985 to expose endpoints map[]
Oct  6 20:55:28.228: INFO: successfully validated that service endpoint-test2 in namespace services-9985 exposes endpoints map[] (29.905532ms elapsed)
STEP: Creating pod pod1 in namespace services-9985
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9985 to expose endpoints map[pod1:[80]]
Oct  6 20:55:33.162: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.92696304s elapsed, will retry)
Oct  6 20:55:35.222: INFO: successfully validated that service endpoint-test2 in namespace services-9985 exposes endpoints map[pod1:[80]] (6.986964588s elapsed)
STEP: Creating pod pod2 in namespace services-9985
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9985 to expose endpoints map[pod1:[80] pod2:[80]]
Oct  6 20:55:39.741: INFO: Unexpected endpoints: found map[509b5ad0-1ea3-4c2f-8d36-8177a391dfd7:[80]], expected map[pod1:[80] pod2:[80]] (4.513008244s elapsed, will retry)
Oct  6 20:55:40.755: INFO: successfully validated that service endpoint-test2 in namespace services-9985 exposes endpoints map[pod1:[80] pod2:[80]] (5.52646147s elapsed)
STEP: Deleting pod pod1 in namespace services-9985
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9985 to expose endpoints map[pod2:[80]]
Oct  6 20:55:40.812: INFO: successfully validated that service endpoint-test2 in namespace services-9985 exposes endpoints map[pod2:[80]] (50.276589ms elapsed)
STEP: Deleting pod pod2 in namespace services-9985
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9985 to expose endpoints map[]
Oct  6 20:55:40.825: INFO: successfully validated that service endpoint-test2 in namespace services-9985 exposes endpoints map[] (6.876316ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:55:40.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9985" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:12.894 seconds]
[sig-network] Services
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":145,"skipped":2433,"failed":0}
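The endpoint test above repeatedly compares the service's endpoint map (pod name to ports, e.g. `map[pod1:[80] pod2:[80]]`) against the expected map, retrying until they match — hence the "Unexpected endpoints: found map[...], will retry" line. A small Python sketch of that validation loop, with `get_endpoints` as an assumed stand-in for reading the service's Endpoints object:

```python
def validate_endpoints(get_endpoints, expected, attempts=36, sleep=lambda _: None):
    """Retry until the service's endpoint map equals `expected`.

    get_endpoints: callable returning {pod_name: [ports]}, a stand-in
    for listing the Endpoints object for the service.
    """
    found = None
    for _ in range(attempts):
        found = get_endpoints()
        if found == expected:
            return found
        sleep(5)
    raise AssertionError(f"found {found}, expected {expected}")

# Mirror the log: empty map, then pod1 only, then both pods exposed.
states = iter([{}, {"pod1": [80]}, {"pod1": [80], "pod2": [80]}])
final = validate_endpoints(lambda: next(states), {"pod1": [80], "pod2": [80]})
print(final)  # {'pod1': [80], 'pod2': [80]}
```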
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:55:40.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:55:58.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7874" for this suite.

• [SLOW TEST:17.526 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":146,"skipped":2448,"failed":0}
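The ResourceQuota STEPs above count the secrets already in the namespace, create a quota, then verify that the quota's `used` count rises when a Secret is created and falls when it is deleted. A toy Python model of that lifecycle — purely illustrative, since the real accounting is done asynchronously by the quota controller, and the class and limits here are invented for the sketch:

```python
class SecretQuota:
    """Toy model of a ResourceQuota tracking secrets in one namespace."""

    def __init__(self, hard, baseline):
        self.hard = hard      # hard limit on secrets (spec.hard["secrets"])
        self.used = baseline  # secrets already present by default

    def create_secret(self):
        # Admission would reject the create if it exceeded the hard limit.
        if self.used + 1 > self.hard:
            raise RuntimeError("exceeded quota: secrets")
        self.used += 1

    def delete_secret(self):
        self.used -= 1

q = SecretQuota(hard=2, baseline=1)  # assume one pre-existing default secret
q.create_secret()
print("used after create:", q.used)  # used after create: 2
q.delete_secret()
print("used after delete:", q.used)  # used after delete: 1
```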
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:55:58.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-7fdc0018-cb89-4ea6-8901-ee69aacc3adc in namespace container-probe-4842
Oct  6 20:56:08.586: INFO: Started pod liveness-7fdc0018-cb89-4ea6-8901-ee69aacc3adc in namespace container-probe-4842
STEP: checking the pod's current state and verifying that restartCount is present
Oct  6 20:56:08.589: INFO: Initial restart count of pod liveness-7fdc0018-cb89-4ea6-8901-ee69aacc3adc is 0
Oct  6 20:56:26.731: INFO: Restart count of pod container-probe-4842/liveness-7fdc0018-cb89-4ea6-8901-ee69aacc3adc is now 1 (18.14254619s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:56:26.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4842" for this suite.

• [SLOW TEST:28.311 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2471,"failed":0}
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:56:26.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:56:31.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6311" for this suite.

• [SLOW TEST:5.177 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a read only busybox container
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2471,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:56:31.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:56:32.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4060" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":149,"skipped":2495,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:56:32.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1548.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1548.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1548.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1548.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1548.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1548.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct  6 20:56:40.262: INFO: DNS probes using dns-1548/dns-test-5c167595-06d0-4905-85fb-4cec123219ba succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:56:40.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1548" for this suite.

• [SLOW TEST:8.252 seconds]
[sig-network] DNS
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":150,"skipped":2511,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:56:40.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Oct  6 20:56:47.657: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:56:47.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-327" for this suite.

• [SLOW TEST:7.369 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2537,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:56:47.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8676.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8676.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8676.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8676.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct  6 20:56:57.883: INFO: DNS probes using dns-test-c91d7d1b-3474-4655-b7aa-9b5bb67d6b6d succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8676.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8676.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8676.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8676.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct  6 20:57:06.816: INFO: File wheezy_udp@dns-test-service-3.dns-8676.svc.cluster.local from pod  dns-8676/dns-test-ccbabfa9-2295-4822-b4c4-39d5f380c8e1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  6 20:57:06.821: INFO: File jessie_udp@dns-test-service-3.dns-8676.svc.cluster.local from pod  dns-8676/dns-test-ccbabfa9-2295-4822-b4c4-39d5f380c8e1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  6 20:57:06.821: INFO: Lookups using dns-8676/dns-test-ccbabfa9-2295-4822-b4c4-39d5f380c8e1 failed for: [wheezy_udp@dns-test-service-3.dns-8676.svc.cluster.local jessie_udp@dns-test-service-3.dns-8676.svc.cluster.local]

Oct  6 20:57:11.826: INFO: File wheezy_udp@dns-test-service-3.dns-8676.svc.cluster.local from pod  dns-8676/dns-test-ccbabfa9-2295-4822-b4c4-39d5f380c8e1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  6 20:57:11.829: INFO: File jessie_udp@dns-test-service-3.dns-8676.svc.cluster.local from pod  dns-8676/dns-test-ccbabfa9-2295-4822-b4c4-39d5f380c8e1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  6 20:57:11.829: INFO: Lookups using dns-8676/dns-test-ccbabfa9-2295-4822-b4c4-39d5f380c8e1 failed for: [wheezy_udp@dns-test-service-3.dns-8676.svc.cluster.local jessie_udp@dns-test-service-3.dns-8676.svc.cluster.local]

Oct  6 20:57:16.826: INFO: File wheezy_udp@dns-test-service-3.dns-8676.svc.cluster.local from pod  dns-8676/dns-test-ccbabfa9-2295-4822-b4c4-39d5f380c8e1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  6 20:57:16.829: INFO: File jessie_udp@dns-test-service-3.dns-8676.svc.cluster.local from pod  dns-8676/dns-test-ccbabfa9-2295-4822-b4c4-39d5f380c8e1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  6 20:57:16.829: INFO: Lookups using dns-8676/dns-test-ccbabfa9-2295-4822-b4c4-39d5f380c8e1 failed for: [wheezy_udp@dns-test-service-3.dns-8676.svc.cluster.local jessie_udp@dns-test-service-3.dns-8676.svc.cluster.local]

Oct  6 20:57:21.827: INFO: File wheezy_udp@dns-test-service-3.dns-8676.svc.cluster.local from pod  dns-8676/dns-test-ccbabfa9-2295-4822-b4c4-39d5f380c8e1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  6 20:57:21.831: INFO: File jessie_udp@dns-test-service-3.dns-8676.svc.cluster.local from pod  dns-8676/dns-test-ccbabfa9-2295-4822-b4c4-39d5f380c8e1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  6 20:57:21.831: INFO: Lookups using dns-8676/dns-test-ccbabfa9-2295-4822-b4c4-39d5f380c8e1 failed for: [wheezy_udp@dns-test-service-3.dns-8676.svc.cluster.local jessie_udp@dns-test-service-3.dns-8676.svc.cluster.local]

Oct  6 20:57:26.830: INFO: DNS probes using dns-test-ccbabfa9-2295-4822-b4c4-39d5f380c8e1 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8676.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8676.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8676.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8676.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct  6 20:57:40.056: INFO: DNS probes using dns-test-b16f398c-fbb6-4503-9fef-4b93d0919ffa succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:57:40.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8676" for this suite.

• [SLOW TEST:53.369 seconds]
[sig-network] DNS
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":152,"skipped":2547,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:57:41.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:58:06.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5472" for this suite.

• [SLOW TEST:25.513 seconds]
[sig-apps] Job
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":153,"skipped":2581,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:58:06.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-jg62
STEP: Creating a pod to test atomic-volume-subpath
Oct  6 20:58:06.756: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jg62" in namespace "subpath-9904" to be "success or failure"
Oct  6 20:58:06.775: INFO: Pod "pod-subpath-test-configmap-jg62": Phase="Pending", Reason="", readiness=false. Elapsed: 18.277573ms
Oct  6 20:58:08.979: INFO: Pod "pod-subpath-test-configmap-jg62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222519782s
Oct  6 20:58:10.984: INFO: Pod "pod-subpath-test-configmap-jg62": Phase="Running", Reason="", readiness=true. Elapsed: 4.226984441s
Oct  6 20:58:12.990: INFO: Pod "pod-subpath-test-configmap-jg62": Phase="Running", Reason="", readiness=true. Elapsed: 6.233628966s
Oct  6 20:58:14.996: INFO: Pod "pod-subpath-test-configmap-jg62": Phase="Running", Reason="", readiness=true. Elapsed: 8.239480145s
Oct  6 20:58:17.002: INFO: Pod "pod-subpath-test-configmap-jg62": Phase="Running", Reason="", readiness=true. Elapsed: 10.245515621s
Oct  6 20:58:19.008: INFO: Pod "pod-subpath-test-configmap-jg62": Phase="Running", Reason="", readiness=true. Elapsed: 12.251186981s
Oct  6 20:58:21.014: INFO: Pod "pod-subpath-test-configmap-jg62": Phase="Running", Reason="", readiness=true. Elapsed: 14.256972799s
Oct  6 20:58:23.020: INFO: Pod "pod-subpath-test-configmap-jg62": Phase="Running", Reason="", readiness=true. Elapsed: 16.262956307s
Oct  6 20:58:25.025: INFO: Pod "pod-subpath-test-configmap-jg62": Phase="Running", Reason="", readiness=true. Elapsed: 18.268765222s
Oct  6 20:58:27.039: INFO: Pod "pod-subpath-test-configmap-jg62": Phase="Running", Reason="", readiness=true. Elapsed: 20.282161349s
Oct  6 20:58:29.045: INFO: Pod "pod-subpath-test-configmap-jg62": Phase="Running", Reason="", readiness=true. Elapsed: 22.288194077s
Oct  6 20:58:31.052: INFO: Pod "pod-subpath-test-configmap-jg62": Phase="Running", Reason="", readiness=true. Elapsed: 24.295492916s
Oct  6 20:58:33.057: INFO: Pod "pod-subpath-test-configmap-jg62": Phase="Running", Reason="", readiness=true. Elapsed: 26.30064119s
Oct  6 20:58:35.063: INFO: Pod "pod-subpath-test-configmap-jg62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.306496752s
STEP: Saw pod success
Oct  6 20:58:35.063: INFO: Pod "pod-subpath-test-configmap-jg62" satisfied condition "success or failure"
Oct  6 20:58:35.067: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-jg62 container test-container-subpath-configmap-jg62: 
STEP: delete the pod
Oct  6 20:58:35.094: INFO: Waiting for pod pod-subpath-test-configmap-jg62 to disappear
Oct  6 20:58:35.111: INFO: Pod pod-subpath-test-configmap-jg62 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-jg62
Oct  6 20:58:35.111: INFO: Deleting pod "pod-subpath-test-configmap-jg62" in namespace "subpath-9904"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:58:35.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9904" for this suite.

• [SLOW TEST:28.551 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":154,"skipped":2613,"failed":0}
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:58:35.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Oct  6 20:58:35.267: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:58:43.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5908" for this suite.

• [SLOW TEST:8.867 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":155,"skipped":2618,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:58:44.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-9c476152-3891-44df-8dfc-ae8bcd1f6759
STEP: Creating a pod to test consume secrets
Oct  6 20:58:45.034: INFO: Waiting up to 5m0s for pod "pod-secrets-211b58ab-8081-4136-943a-7c804d35d438" in namespace "secrets-2669" to be "success or failure"
Oct  6 20:58:45.039: INFO: Pod "pod-secrets-211b58ab-8081-4136-943a-7c804d35d438": Phase="Pending", Reason="", readiness=false. Elapsed: 4.469924ms
Oct  6 20:58:48.142: INFO: Pod "pod-secrets-211b58ab-8081-4136-943a-7c804d35d438": Phase="Pending", Reason="", readiness=false. Elapsed: 3.107204648s
Oct  6 20:58:50.172: INFO: Pod "pod-secrets-211b58ab-8081-4136-943a-7c804d35d438": Phase="Pending", Reason="", readiness=false. Elapsed: 5.13780046s
Oct  6 20:58:52.180: INFO: Pod "pod-secrets-211b58ab-8081-4136-943a-7c804d35d438": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.145473007s
STEP: Saw pod success
Oct  6 20:58:52.180: INFO: Pod "pod-secrets-211b58ab-8081-4136-943a-7c804d35d438" satisfied condition "success or failure"
Oct  6 20:58:52.185: INFO: Trying to get logs from node jerma-worker pod pod-secrets-211b58ab-8081-4136-943a-7c804d35d438 container secret-volume-test: 
STEP: delete the pod
Oct  6 20:58:52.209: INFO: Waiting for pod pod-secrets-211b58ab-8081-4136-943a-7c804d35d438 to disappear
Oct  6 20:58:52.230: INFO: Pod pod-secrets-211b58ab-8081-4136-943a-7c804d35d438 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:58:52.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2669" for this suite.

• [SLOW TEST:8.224 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2627,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:58:52.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:59:08.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4904" for this suite.

• [SLOW TEST:16.138 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":157,"skipped":2636,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:59:08.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Oct  6 20:59:11.336: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Oct  6 20:59:13.766: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614751, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614751, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614751, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614751, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 20:59:15.789: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614751, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614751, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614751, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614751, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 20:59:17.775: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614751, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614751, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614751, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614751, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 20:59:19.772: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614751, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614751, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614751, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737614751, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct  6 20:59:22.818: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 20:59:22.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:59:24.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-8062" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:15.791 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":158,"skipped":2641,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
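The conversion test above exercises a webhook that rewrites a custom resource from v1 to v2. As a hedged sketch (the field names `hostPort`, `host`, and `port` are the schema used by the sample e2e webhook, assumed here), the per-object logic looks roughly like:

```python
def convert_cr(obj, desired_api_version):
    """Sketch of a conversion webhook's per-object logic.

    Assumes the sample schema used by the e2e conversion webhook, where
    v1 carries a combined `hostPort` field and v2 splits it into
    separate `host` and `port` fields.
    """
    converted = dict(obj)
    if converted["apiVersion"] == desired_api_version:
        return converted  # nothing to do
    if desired_api_version.endswith("/v2"):
        # v1 -> v2: split "host:port" into two fields
        host, _, port = converted.pop("hostPort").rpartition(":")
        converted["host"], converted["port"] = host, port
    else:
        # v2 -> v1: recombine the two fields
        converted["hostPort"] = f"{converted.pop('host')}:{converted.pop('port')}"
    converted["apiVersion"] = desired_api_version
    return converted
```

The real webhook receives a `ConversionReview` with a list of objects and must return them in the same order; this sketch covers only the field mapping.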
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:59:24.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:59:30.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8645" for this suite.

• [SLOW TEST:6.375 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":159,"skipped":2670,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:59:30.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap that has name configmap-test-emptyKey-67690b25-996e-4fd8-9004-348a8ba5a467
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:59:30.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1387" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":160,"skipped":2697,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
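The ConfigMap test above relies on the API server rejecting an empty data key. A rough sketch of that validation (the exact rules live in apimachinery's validation package; the regex and 253-character limit are assumptions based on the documented key format):

```python
import re

# Assumed key charset: alphanumerics, '-', '_' and '.'
_KEY_RE = re.compile(r"^[-._a-zA-Z0-9]+$")

def validate_configmap_key(key):
    """Rough sketch of ConfigMap data-key validation."""
    if not key:
        return False            # empty keys are rejected, as in the test above
    if len(key) > 253:
        return False            # assumed max key length
    if key in (".", ".."):
        return False            # path-like keys are rejected
    return bool(_KEY_RE.match(key))
```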
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:59:30.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Oct  6 20:59:31.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7604'
Oct  6 20:59:35.834: INFO: stderr: ""
Oct  6 20:59:35.834: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Oct  6 20:59:36.842: INFO: Selector matched 1 pods for map[app:agnhost]
Oct  6 20:59:36.842: INFO: Found 0 / 1
Oct  6 20:59:38.417: INFO: Selector matched 1 pods for map[app:agnhost]
Oct  6 20:59:38.417: INFO: Found 0 / 1
Oct  6 20:59:39.198: INFO: Selector matched 1 pods for map[app:agnhost]
Oct  6 20:59:39.198: INFO: Found 0 / 1
Oct  6 20:59:40.000: INFO: Selector matched 1 pods for map[app:agnhost]
Oct  6 20:59:40.001: INFO: Found 0 / 1
Oct  6 20:59:40.841: INFO: Selector matched 1 pods for map[app:agnhost]
Oct  6 20:59:40.842: INFO: Found 0 / 1
Oct  6 20:59:41.855: INFO: Selector matched 1 pods for map[app:agnhost]
Oct  6 20:59:41.855: INFO: Found 0 / 1
Oct  6 20:59:43.064: INFO: Selector matched 1 pods for map[app:agnhost]
Oct  6 20:59:43.064: INFO: Found 0 / 1
Oct  6 20:59:44.016: INFO: Selector matched 1 pods for map[app:agnhost]
Oct  6 20:59:44.016: INFO: Found 0 / 1
Oct  6 20:59:44.858: INFO: Selector matched 1 pods for map[app:agnhost]
Oct  6 20:59:44.858: INFO: Found 0 / 1
Oct  6 20:59:45.843: INFO: Selector matched 1 pods for map[app:agnhost]
Oct  6 20:59:45.843: INFO: Found 0 / 1
Oct  6 20:59:46.842: INFO: Selector matched 1 pods for map[app:agnhost]
Oct  6 20:59:46.842: INFO: Found 0 / 1
Oct  6 20:59:47.841: INFO: Selector matched 1 pods for map[app:agnhost]
Oct  6 20:59:47.841: INFO: Found 1 / 1
Oct  6 20:59:47.842: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Oct  6 20:59:47.847: INFO: Selector matched 1 pods for map[app:agnhost]
Oct  6 20:59:47.847: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Oct  6 20:59:47.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-snbvt --namespace=kubectl-7604 -p {"metadata":{"annotations":{"x":"y"}}}'
Oct  6 20:59:49.066: INFO: stderr: ""
Oct  6 20:59:49.067: INFO: stdout: "pod/agnhost-master-snbvt patched\n"
STEP: checking annotations
Oct  6 20:59:49.072: INFO: Selector matched 1 pods for map[app:agnhost]
Oct  6 20:59:49.072: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 20:59:49.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7604" for this suite.

• [SLOW TEST:18.408 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1433
    should add annotations for pods in rc  [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":161,"skipped":2712,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
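The `kubectl patch` step above sends `{"metadata":{"annotations":{"x":"y"}}}`, which merges the new annotation into the existing map rather than replacing it. For plain maps such as `metadata.annotations`, kubectl's default strategic merge patch behaves like a JSON merge patch (RFC 7386), which can be sketched as:

```python
def merge_patch(target, patch):
    """Minimal JSON-merge-patch sketch (RFC 7386 semantics)."""
    if not isinstance(patch, dict):
        return patch            # scalars and lists replace wholesale
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)   # an explicit null deletes the key
        else:
            result[key] = merge_patch(result.get(key), value)
    return result
```

Applied to a pod that already has annotations, the patch from the log adds `x: y` without disturbing existing entries.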
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 20:59:49.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-7843161d-d00b-4bbc-8ac9-5827f1980524
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-7843161d-d00b-4bbc-8ac9-5827f1980524
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:01:32.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5059" for this suite.

• [SLOW TEST:103.539 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2764,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
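The long "waiting to observe update in volume" phase above reflects how the kubelet propagates ConfigMap changes: on its sync period it rewrites the projected files and swaps them in atomically. A hedged sketch of the assumed mechanism (files exposed through a `..data` symlink that is replaced in a single rename, so readers never see a half-written update):

```python
import os
import tempfile

def atomic_update(volume_dir, payload):
    """Sketch of an atomic-writer style update for a projected volume."""
    # 1. Write the new payload into a fresh timestamped directory.
    ts_dir = tempfile.mkdtemp(prefix="..ts_", dir=volume_dir)
    for name, content in payload.items():
        with open(os.path.join(ts_dir, name), "w") as f:
            f.write(content)
    # 2. Point a temporary symlink at it, then rename over `..data`.
    #    rename() replaces the old link atomically on POSIX systems.
    tmp_link = os.path.join(volume_dir, "..data_tmp")
    os.symlink(os.path.basename(ts_dir), tmp_link)
    os.rename(tmp_link, os.path.join(volume_dir, "..data"))
```

A reader that always opens `volume_dir/..data/<name>` sees either the old payload or the new one, never a mix.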
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:01:32.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-df9c666c-8c29-41ac-b085-0f139fab75b6
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:01:47.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7282" for this suite.

• [SLOW TEST:14.482 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2826,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
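The binary-data test above works because a ConfigMap carries non-UTF-8 payloads in its `binaryData` field, base64-encoded in the API object; the kubelet decodes them back to raw bytes when writing the volume. A minimal sketch of that round trip:

```python
import base64

def to_binary_data(raw):
    """Encode raw bytes the way the API object stores ConfigMap binaryData."""
    return {k: base64.b64encode(v).decode("ascii") for k, v in raw.items()}

def from_binary_data(encoded):
    """Decode binaryData back to the bytes the volume file will contain."""
    return {k: base64.b64decode(v) for k, v in encoded.items()}
```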
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:01:47.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Oct  6 21:01:55.288: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:01:55.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6413" for this suite.

• [SLOW TEST:8.667 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":164,"skipped":2848,"failed":0}
SSS
------------------------------
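The adopt/release behavior verified above comes down to label-selector matching: the ReplicaSet adopts an orphan pod whose labels match its selector, and releases a pod whose labels stop matching. A labels-only sketch (the real controller also checks ownerReferences and controller UIDs):

```python
def reconcile_ownership(rs_name, selector, pods):
    """Sketch of a ReplicaSet's adopt/release decision, labels only."""
    for pod in pods:
        matches = all(pod["labels"].get(k) == v for k, v in selector.items())
        owner = pod.get("owner")
        if matches and owner is None:
            pod["owner"] = rs_name      # adopt the matching orphan
        elif not matches and owner == rs_name:
            pod["owner"] = None         # release: labels no longer match
    return pods
```

This mirrors the two STEPs in the log: the orphan is adopted on creation, then released once its matched label is changed.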
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:01:55.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Oct  6 21:02:04.064: INFO: Successfully updated pod "annotationupdate58a90cb0-43d0-47fa-9476-673fd9024578"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:02:06.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6610" for this suite.

• [SLOW TEST:10.446 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2851,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
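The annotation-update test above watches a downward API volume file regenerate after the pod's metadata changes. The file format is assumed here to be one `key="value"` line per map entry, sorted by key so regeneration is deterministic:

```python
def format_metadata_map(annotations):
    """Sketch of rendering metadata.annotations into a downward API file.

    Assumed format: one `key="value"` line per entry, sorted by key.
    """
    return "\n".join(f'{k}="{v}"' for k, v in sorted(annotations.items()))
```

When the annotation map changes, the kubelet rewrites this file, which is what the test observes inside the container.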
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:02:06.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:02:14.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5485" for this suite.

• [SLOW TEST:8.620 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2872,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:02:14.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Oct  6 21:02:14.954: INFO: Waiting up to 5m0s for pod "pod-eb1f7271-7637-4ba3-b231-a08d77889229" in namespace "emptydir-9290" to be "success or failure"
Oct  6 21:02:14.998: INFO: Pod "pod-eb1f7271-7637-4ba3-b231-a08d77889229": Phase="Pending", Reason="", readiness=false. Elapsed: 43.829117ms
Oct  6 21:02:17.004: INFO: Pod "pod-eb1f7271-7637-4ba3-b231-a08d77889229": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050093475s
Oct  6 21:02:19.123: INFO: Pod "pod-eb1f7271-7637-4ba3-b231-a08d77889229": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168902641s
Oct  6 21:02:21.138: INFO: Pod "pod-eb1f7271-7637-4ba3-b231-a08d77889229": Phase="Running", Reason="", readiness=true. Elapsed: 6.184620234s
Oct  6 21:02:23.144: INFO: Pod "pod-eb1f7271-7637-4ba3-b231-a08d77889229": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.190598342s
STEP: Saw pod success
Oct  6 21:02:23.145: INFO: Pod "pod-eb1f7271-7637-4ba3-b231-a08d77889229" satisfied condition "success or failure"
Oct  6 21:02:23.182: INFO: Trying to get logs from node jerma-worker pod pod-eb1f7271-7637-4ba3-b231-a08d77889229 container test-container: 
STEP: delete the pod
Oct  6 21:02:23.481: INFO: Waiting for pod pod-eb1f7271-7637-4ba3-b231-a08d77889229 to disappear
Oct  6 21:02:23.672: INFO: Pod pod-eb1f7271-7637-4ba3-b231-a08d77889229 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:02:23.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9290" for this suite.

• [SLOW TEST:8.844 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2919,"failed":0}
SSSSSSSSS
------------------------------
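The mode check above inspects the permission bits of the mounted emptyDir path (assumed expectation: 0777 on the default medium, which is why the test name says "correct mode"). The check can be sketched by rendering a path's mode the way `ls -l` does:

```python
import os
import stat
import tempfile

def mode_string(path):
    """Render a path's permission bits in ls -l form, e.g. 'drwxrwxrwx'."""
    return stat.filemode(os.stat(path).st_mode)

# Stand-in for the emptyDir mount point; chmod set explicitly so the
# process umask does not interfere with the demonstration.
d = tempfile.mkdtemp()
os.chmod(d, 0o777)
```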
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:02:23.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Oct  6 21:02:24.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1648'
Oct  6 21:02:25.923: INFO: stderr: ""
Oct  6 21:02:25.923: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Oct  6 21:02:25.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1648'
Oct  6 21:02:27.183: INFO: stderr: ""
Oct  6 21:02:27.184: INFO: stdout: "update-demo-nautilus-25dvj update-demo-nautilus-czhgk "
Oct  6 21:02:27.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-25dvj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1648'
Oct  6 21:02:28.882: INFO: stderr: ""
Oct  6 21:02:28.882: INFO: stdout: ""
Oct  6 21:02:28.882: INFO: update-demo-nautilus-25dvj is created but not running
Oct  6 21:02:33.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1648'
Oct  6 21:02:35.136: INFO: stderr: ""
Oct  6 21:02:35.136: INFO: stdout: "update-demo-nautilus-25dvj update-demo-nautilus-czhgk "
Oct  6 21:02:35.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-25dvj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1648'
Oct  6 21:02:36.372: INFO: stderr: ""
Oct  6 21:02:36.372: INFO: stdout: "true"
Oct  6 21:02:36.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-25dvj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1648'
Oct  6 21:02:37.607: INFO: stderr: ""
Oct  6 21:02:37.607: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Oct  6 21:02:37.607: INFO: validating pod update-demo-nautilus-25dvj
Oct  6 21:02:37.612: INFO: got data: {
  "image": "nautilus.jpg"
}

Oct  6 21:02:37.612: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct  6 21:02:37.612: INFO: update-demo-nautilus-25dvj is verified up and running
Oct  6 21:02:37.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-czhgk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1648'
Oct  6 21:02:38.829: INFO: stderr: ""
Oct  6 21:02:38.830: INFO: stdout: "true"
Oct  6 21:02:38.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-czhgk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1648'
Oct  6 21:02:40.081: INFO: stderr: ""
Oct  6 21:02:40.081: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Oct  6 21:02:40.082: INFO: validating pod update-demo-nautilus-czhgk
Oct  6 21:02:40.088: INFO: got data: {
  "image": "nautilus.jpg"
}

Oct  6 21:02:40.088: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct  6 21:02:40.088: INFO: update-demo-nautilus-czhgk is verified up and running
STEP: rolling-update to new replication controller
Oct  6 21:02:40.098: INFO: scanned /root for discovery docs: 
Oct  6 21:02:40.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-1648'
Oct  6 21:03:10.167: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Oct  6 21:03:10.167: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Oct  6 21:03:10.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1648'
Oct  6 21:03:11.405: INFO: stderr: ""
Oct  6 21:03:11.405: INFO: stdout: "update-demo-kitten-5wjz8 update-demo-kitten-v22fx "
Oct  6 21:03:11.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5wjz8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1648'
Oct  6 21:03:12.605: INFO: stderr: ""
Oct  6 21:03:12.605: INFO: stdout: "true"
Oct  6 21:03:12.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5wjz8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1648'
Oct  6 21:03:13.840: INFO: stderr: ""
Oct  6 21:03:13.840: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Oct  6 21:03:13.840: INFO: validating pod update-demo-kitten-5wjz8
Oct  6 21:03:13.847: INFO: got data: {
  "image": "kitten.jpg"
}

Oct  6 21:03:13.847: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Oct  6 21:03:13.847: INFO: update-demo-kitten-5wjz8 is verified up and running
Oct  6 21:03:13.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-v22fx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1648'
Oct  6 21:03:15.098: INFO: stderr: ""
Oct  6 21:03:15.098: INFO: stdout: "true"
Oct  6 21:03:15.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-v22fx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1648'
Oct  6 21:03:16.391: INFO: stderr: ""
Oct  6 21:03:16.391: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Oct  6 21:03:16.391: INFO: validating pod update-demo-kitten-v22fx
Oct  6 21:03:16.405: INFO: got data: {
  "image": "kitten.jpg"
}

Oct  6 21:03:16.406: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Oct  6 21:03:16.406: INFO: update-demo-kitten-v22fx is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:03:16.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1648" for this suite.

• [SLOW TEST:52.694 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should do a rolling update of a replication controller  [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":168,"skipped":2928,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:03:16.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Oct  6 21:03:16.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:04:41.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4280" for this suite.

• [SLOW TEST:85.014 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":169,"skipped":2929,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:04:41.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct  6 21:04:46.476: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct  6 21:04:48.550: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615086, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615086, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615086, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615086, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct  6 21:04:51.598: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:05:03.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6678" for this suite.
STEP: Destroying namespace "webhook-6678-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:22.516 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":170,"skipped":2950,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:05:03.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Oct  6 21:05:08.092: INFO: Pod pod-hostip-8dc2a1ca-856c-4ec6-a7d5-53404845f1ab has hostIP: 172.18.0.9
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:05:08.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5390" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2965,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:05:08.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-dzw6x in namespace proxy-1693
I1006 21:05:08.289927       7 runners.go:189] Created replication controller with name: proxy-service-dzw6x, namespace: proxy-1693, replica count: 1
I1006 21:05:09.342078       7 runners.go:189] proxy-service-dzw6x Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1006 21:05:10.342817       7 runners.go:189] proxy-service-dzw6x Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1006 21:05:11.343543       7 runners.go:189] proxy-service-dzw6x Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1006 21:05:12.344521       7 runners.go:189] proxy-service-dzw6x Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1006 21:05:13.345352       7 runners.go:189] proxy-service-dzw6x Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1006 21:05:14.346107       7 runners.go:189] proxy-service-dzw6x Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1006 21:05:15.347031       7 runners.go:189] proxy-service-dzw6x Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1006 21:05:16.347763       7 runners.go:189] proxy-service-dzw6x Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct  6 21:05:16.359: INFO: setup took 8.149320879s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Oct  6 21:05:16.371: INFO: (0) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz/proxy/: test (200; 10.263528ms)
Oct  6 21:05:16.371: INFO: (0) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:1080/proxy/: test<... (200; 10.449798ms)
Oct  6 21:05:16.375: INFO: (0) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname2/proxy/: bar (200; 14.788609ms)
Oct  6 21:05:16.377: INFO: (0) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 16.695481ms)
Oct  6 21:05:16.378: INFO: (0) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 16.460576ms)
Oct  6 21:05:16.378: INFO: (0) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 16.669465ms)
Oct  6 21:05:16.378: INFO: (0) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname1/proxy/: foo (200; 17.614517ms)
Oct  6 21:05:16.378: INFO: (0) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 16.83534ms)
Oct  6 21:05:16.378: INFO: (0) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:1080/proxy/: ... (200; 17.124578ms)
Oct  6 21:05:16.378: INFO: (0) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname2/proxy/: bar (200; 17.819519ms)
Oct  6 21:05:16.378: INFO: (0) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname2/proxy/: tls qux (200; 17.388391ms)
Oct  6 21:05:16.379: INFO: (0) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:462/proxy/: tls qux (200; 18.06173ms)
Oct  6 21:05:16.379: INFO: (0) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname1/proxy/: foo (200; 17.837507ms)
Oct  6 21:05:16.382: INFO: (0) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:460/proxy/: tls baz (200; 21.159389ms)
Oct  6 21:05:16.382: INFO: (0) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname1/proxy/: tls baz (200; 21.179258ms)
Oct  6 21:05:16.382: INFO: (0) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:443/proxy/: test (200; 5.118754ms)
Oct  6 21:05:16.388: INFO: (1) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:1080/proxy/: ... (200; 5.348933ms)
Oct  6 21:05:16.388: INFO: (1) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:462/proxy/: tls qux (200; 5.40361ms)
Oct  6 21:05:16.388: INFO: (1) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname1/proxy/: foo (200; 5.609911ms)
Oct  6 21:05:16.388: INFO: (1) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:460/proxy/: tls baz (200; 5.951369ms)
Oct  6 21:05:16.389: INFO: (1) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname1/proxy/: tls baz (200; 6.073026ms)
Oct  6 21:05:16.389: INFO: (1) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 5.901917ms)
Oct  6 21:05:16.389: INFO: (1) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:1080/proxy/: test<... (200; 6.241482ms)
Oct  6 21:05:16.389: INFO: (1) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname2/proxy/: tls qux (200; 6.342774ms)
Oct  6 21:05:16.389: INFO: (1) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 6.327951ms)
Oct  6 21:05:16.389: INFO: (1) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:443/proxy/: test<... (200; 6.32774ms)
Oct  6 21:05:16.397: INFO: (2) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname2/proxy/: bar (200; 6.536695ms)
Oct  6 21:05:16.397: INFO: (2) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 6.438496ms)
Oct  6 21:05:16.398: INFO: (2) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname1/proxy/: foo (200; 6.910138ms)
Oct  6 21:05:16.398: INFO: (2) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz/proxy/: test (200; 6.932099ms)
Oct  6 21:05:16.398: INFO: (2) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname1/proxy/: tls baz (200; 6.842979ms)
Oct  6 21:05:16.398: INFO: (2) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname1/proxy/: foo (200; 7.01915ms)
Oct  6 21:05:16.398: INFO: (2) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname2/proxy/: bar (200; 7.152465ms)
Oct  6 21:05:16.398: INFO: (2) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:1080/proxy/: ... (200; 6.997338ms)
Oct  6 21:05:16.402: INFO: (3) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:1080/proxy/: test<... (200; 3.817261ms)
Oct  6 21:05:16.403: INFO: (3) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 4.041131ms)
Oct  6 21:05:16.403: INFO: (3) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 4.147343ms)
Oct  6 21:05:16.404: INFO: (3) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname2/proxy/: bar (200; 5.336092ms)
Oct  6 21:05:16.404: INFO: (3) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 5.701775ms)
Oct  6 21:05:16.404: INFO: (3) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname2/proxy/: bar (200; 5.919496ms)
Oct  6 21:05:16.404: INFO: (3) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:462/proxy/: tls qux (200; 5.942916ms)
Oct  6 21:05:16.404: INFO: (3) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz/proxy/: test (200; 6.094666ms)
Oct  6 21:05:16.405: INFO: (3) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname1/proxy/: foo (200; 5.992992ms)
Oct  6 21:05:16.405: INFO: (3) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname1/proxy/: foo (200; 6.235635ms)
Oct  6 21:05:16.405: INFO: (3) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:443/proxy/: ... (200; 7.715618ms)
Oct  6 21:05:16.406: INFO: (3) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname1/proxy/: tls baz (200; 7.808679ms)
Oct  6 21:05:16.406: INFO: (3) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname2/proxy/: tls qux (200; 7.6346ms)
Oct  6 21:05:16.406: INFO: (3) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:460/proxy/: tls baz (200; 7.644068ms)
Oct  6 21:05:16.410: INFO: (4) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:443/proxy/: test<... (200; 4.49586ms)
Oct  6 21:05:16.411: INFO: (4) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 4.733631ms)
Oct  6 21:05:16.411: INFO: (4) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:460/proxy/: tls baz (200; 4.901085ms)
Oct  6 21:05:16.412: INFO: (4) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname2/proxy/: bar (200; 5.019426ms)
Oct  6 21:05:16.412: INFO: (4) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 4.896831ms)
Oct  6 21:05:16.412: INFO: (4) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:1080/proxy/: ... (200; 4.953823ms)
Oct  6 21:05:16.412: INFO: (4) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:462/proxy/: tls qux (200; 5.023931ms)
Oct  6 21:05:16.412: INFO: (4) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz/proxy/: test (200; 5.189296ms)
Oct  6 21:05:16.413: INFO: (4) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname2/proxy/: tls qux (200; 6.056894ms)
Oct  6 21:05:16.413: INFO: (4) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname1/proxy/: foo (200; 6.130851ms)
Oct  6 21:05:16.413: INFO: (4) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname2/proxy/: bar (200; 6.15147ms)
Oct  6 21:05:16.413: INFO: (4) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname1/proxy/: tls baz (200; 6.391424ms)
Oct  6 21:05:16.413: INFO: (4) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname1/proxy/: foo (200; 6.256754ms)
Oct  6 21:05:16.413: INFO: (4) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 6.168882ms)
Oct  6 21:05:16.413: INFO: (4) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 6.198211ms)
Oct  6 21:05:16.417: INFO: (5) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 2.931289ms)
Oct  6 21:05:16.417: INFO: (5) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:462/proxy/: tls qux (200; 3.004564ms)
Oct  6 21:05:16.417: INFO: (5) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:443/proxy/: test (200; 5.958943ms)
Oct  6 21:05:16.420: INFO: (5) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 5.957962ms)
Oct  6 21:05:16.420: INFO: (5) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:1080/proxy/: ... (200; 6.411899ms)
Oct  6 21:05:16.420: INFO: (5) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 6.578343ms)
Oct  6 21:05:16.420: INFO: (5) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname2/proxy/: bar (200; 6.446254ms)
Oct  6 21:05:16.420: INFO: (5) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname1/proxy/: foo (200; 6.93162ms)
Oct  6 21:05:16.420: INFO: (5) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:1080/proxy/: test<... (200; 6.644314ms)
Oct  6 21:05:16.421: INFO: (5) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname2/proxy/: tls qux (200; 6.869042ms)
Oct  6 21:05:16.424: INFO: (6) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:443/proxy/: test<... (200; 6.758771ms)
Oct  6 21:05:16.428: INFO: (6) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname2/proxy/: bar (200; 7.101733ms)
Oct  6 21:05:16.428: INFO: (6) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:1080/proxy/: ... (200; 6.873581ms)
Oct  6 21:05:16.428: INFO: (6) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname2/proxy/: tls qux (200; 7.373512ms)
Oct  6 21:05:16.428: INFO: (6) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname1/proxy/: foo (200; 7.187857ms)
Oct  6 21:05:16.429: INFO: (6) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname2/proxy/: bar (200; 7.318746ms)
Oct  6 21:05:16.429: INFO: (6) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 7.781182ms)
Oct  6 21:05:16.429: INFO: (6) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz/proxy/: test (200; 7.67172ms)
Oct  6 21:05:16.432: INFO: (7) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 3.614468ms)
Oct  6 21:05:16.433: INFO: (7) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 3.525623ms)
Oct  6 21:05:16.433: INFO: (7) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname1/proxy/: foo (200; 4.00324ms)
Oct  6 21:05:16.433: INFO: (7) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 4.059335ms)
Oct  6 21:05:16.433: INFO: (7) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:1080/proxy/: ... (200; 3.703803ms)
Oct  6 21:05:16.433: INFO: (7) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:460/proxy/: tls baz (200; 4.277599ms)
Oct  6 21:05:16.433: INFO: (7) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:443/proxy/: test<... (200; 4.186444ms)
Oct  6 21:05:16.434: INFO: (7) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 5.230518ms)
Oct  6 21:05:16.435: INFO: (7) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname2/proxy/: bar (200; 5.305026ms)
Oct  6 21:05:16.435: INFO: (7) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname1/proxy/: foo (200; 5.963267ms)
Oct  6 21:05:16.435: INFO: (7) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz/proxy/: test (200; 5.783833ms)
Oct  6 21:05:16.435: INFO: (7) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:462/proxy/: tls qux (200; 5.611194ms)
Oct  6 21:05:16.435: INFO: (7) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname2/proxy/: bar (200; 6.009844ms)
Oct  6 21:05:16.435: INFO: (7) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname2/proxy/: tls qux (200; 6.430384ms)
Oct  6 21:05:16.435: INFO: (7) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname1/proxy/: tls baz (200; 5.949732ms)
Oct  6 21:05:16.439: INFO: (8) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:1080/proxy/: test<... (200; 3.209707ms)
Oct  6 21:05:16.439: INFO: (8) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:460/proxy/: tls baz (200; 3.388324ms)
Oct  6 21:05:16.440: INFO: (8) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 4.739377ms)
Oct  6 21:05:16.440: INFO: (8) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:443/proxy/: ... (200; 6.660009ms)
Oct  6 21:05:16.443: INFO: (8) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 6.798262ms)
Oct  6 21:05:16.443: INFO: (8) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:462/proxy/: tls qux (200; 6.897457ms)
Oct  6 21:05:16.443: INFO: (8) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname1/proxy/: foo (200; 7.13131ms)
Oct  6 21:05:16.443: INFO: (8) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname1/proxy/: foo (200; 7.073918ms)
Oct  6 21:05:16.443: INFO: (8) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname2/proxy/: bar (200; 7.51772ms)
Oct  6 21:05:16.443: INFO: (8) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz/proxy/: test (200; 7.412048ms)
Oct  6 21:05:16.447: INFO: (9) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:1080/proxy/: ... (200; 4.202985ms)
Oct  6 21:05:16.448: INFO: (9) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:460/proxy/: tls baz (200; 4.34169ms)
Oct  6 21:05:16.448: INFO: (9) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname1/proxy/: foo (200; 4.450719ms)
Oct  6 21:05:16.448: INFO: (9) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname1/proxy/: tls baz (200; 4.55027ms)
Oct  6 21:05:16.448: INFO: (9) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz/proxy/: test (200; 4.102412ms)
Oct  6 21:05:16.448: INFO: (9) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 4.535121ms)
Oct  6 21:05:16.448: INFO: (9) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname2/proxy/: tls qux (200; 4.755114ms)
Oct  6 21:05:16.449: INFO: (9) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:1080/proxy/: test<... (200; 5.545438ms)
Oct  6 21:05:16.450: INFO: (9) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 6.119295ms)
Oct  6 21:05:16.450: INFO: (9) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 6.33726ms)
Oct  6 21:05:16.450: INFO: (9) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 6.441282ms)
Oct  6 21:05:16.450: INFO: (9) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:443/proxy/: test (200; 7.495734ms)
Oct  6 21:05:16.460: INFO: (10) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:460/proxy/: tls baz (200; 8.359343ms)
Oct  6 21:05:16.460: INFO: (10) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:1080/proxy/: ... (200; 8.659193ms)
Oct  6 21:05:16.460: INFO: (10) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:1080/proxy/: test<... (200; 8.815877ms)
Oct  6 21:05:16.460: INFO: (10) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:443/proxy/: test<... (200; 4.043821ms)
Oct  6 21:05:16.469: INFO: (11) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname1/proxy/: foo (200; 4.612384ms)
Oct  6 21:05:16.469: INFO: (11) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:462/proxy/: tls qux (200; 4.621716ms)
Oct  6 21:05:16.469: INFO: (11) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:443/proxy/: ... (200; 5.253819ms)
Oct  6 21:05:16.469: INFO: (11) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname1/proxy/: foo (200; 5.334745ms)
Oct  6 21:05:16.469: INFO: (11) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz/proxy/: test (200; 5.237098ms)
Oct  6 21:05:16.469: INFO: (11) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname1/proxy/: tls baz (200; 5.182653ms)
Oct  6 21:05:16.470: INFO: (11) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 5.419577ms)
Oct  6 21:05:16.470: INFO: (11) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 5.773336ms)
Oct  6 21:05:16.470: INFO: (11) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:460/proxy/: tls baz (200; 5.853771ms)
Oct  6 21:05:16.470: INFO: (11) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname2/proxy/: bar (200; 6.186154ms)
Oct  6 21:05:16.470: INFO: (11) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 6.203395ms)
Oct  6 21:05:16.470: INFO: (11) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname2/proxy/: bar (200; 6.433306ms)
Oct  6 21:05:16.471: INFO: (11) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname2/proxy/: tls qux (200; 6.546678ms)
Oct  6 21:05:16.474: INFO: (12) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:1080/proxy/: ... (200; 3.027129ms)
Oct  6 21:05:16.474: INFO: (12) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz/proxy/: test (200; 3.475593ms)
Oct  6 21:05:16.475: INFO: (12) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname1/proxy/: foo (200; 3.857273ms)
Oct  6 21:05:16.475: INFO: (12) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname2/proxy/: bar (200; 4.180332ms)
Oct  6 21:05:16.475: INFO: (12) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:443/proxy/: test<... (200; 6.450428ms)
Oct  6 21:05:16.478: INFO: (12) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname1/proxy/: foo (200; 6.586415ms)
Oct  6 21:05:16.478: INFO: (12) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:460/proxy/: tls baz (200; 6.608934ms)
Oct  6 21:05:16.481: INFO: (13) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz/proxy/: test (200; 3.314873ms)
Oct  6 21:05:16.481: INFO: (13) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:462/proxy/: tls qux (200; 3.479358ms)
Oct  6 21:05:16.482: INFO: (13) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 3.806741ms)
Oct  6 21:05:16.482: INFO: (13) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:460/proxy/: tls baz (200; 3.720354ms)
Oct  6 21:05:16.482: INFO: (13) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 3.688413ms)
Oct  6 21:05:16.483: INFO: (13) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:1080/proxy/: test<... (200; 4.610028ms)
Oct  6 21:05:16.483: INFO: (13) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname2/proxy/: tls qux (200; 4.62396ms)
Oct  6 21:05:16.483: INFO: (13) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 5.215925ms)
Oct  6 21:05:16.483: INFO: (13) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname1/proxy/: foo (200; 5.294245ms)
Oct  6 21:05:16.483: INFO: (13) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:1080/proxy/: ... (200; 5.329732ms)
Oct  6 21:05:16.484: INFO: (13) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname1/proxy/: tls baz (200; 5.708695ms)
Oct  6 21:05:16.484: INFO: (13) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 5.534443ms)
Oct  6 21:05:16.484: INFO: (13) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:443/proxy/: ... (200; 3.97887ms)
Oct  6 21:05:16.489: INFO: (14) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz/proxy/: test (200; 4.145673ms)
Oct  6 21:05:16.489: INFO: (14) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 4.207712ms)
Oct  6 21:05:16.489: INFO: (14) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname2/proxy/: bar (200; 4.174084ms)
Oct  6 21:05:16.489: INFO: (14) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:460/proxy/: tls baz (200; 4.55966ms)
Oct  6 21:05:16.490: INFO: (14) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 4.576928ms)
Oct  6 21:05:16.490: INFO: (14) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname1/proxy/: foo (200; 4.99666ms)
Oct  6 21:05:16.490: INFO: (14) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 5.118326ms)
Oct  6 21:05:16.490: INFO: (14) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:1080/proxy/: test<... (200; 4.934295ms)
Oct  6 21:05:16.490: INFO: (14) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:462/proxy/: tls qux (200; 5.259482ms)
Oct  6 21:05:16.490: INFO: (14) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname1/proxy/: foo (200; 5.169064ms)
Oct  6 21:05:16.490: INFO: (14) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 5.367186ms)
Oct  6 21:05:16.491: INFO: (14) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname2/proxy/: bar (200; 5.452664ms)
Oct  6 21:05:16.491: INFO: (14) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname2/proxy/: tls qux (200; 6.064197ms)
Oct  6 21:05:16.491: INFO: (14) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname1/proxy/: tls baz (200; 5.77597ms)
Oct  6 21:05:16.495: INFO: (15) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:460/proxy/: tls baz (200; 3.15995ms)
Oct  6 21:05:16.495: INFO: (15) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 3.898989ms)
Oct  6 21:05:16.495: INFO: (15) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname1/proxy/: foo (200; 4.095465ms)
Oct  6 21:05:16.495: INFO: (15) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:1080/proxy/: test<... (200; 3.990815ms)
Oct  6 21:05:16.496: INFO: (15) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz/proxy/: test (200; 4.070763ms)
Oct  6 21:05:16.496: INFO: (15) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 4.786065ms)
Oct  6 21:05:16.496: INFO: (15) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname2/proxy/: bar (200; 4.946096ms)
Oct  6 21:05:16.496: INFO: (15) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 4.960531ms)
Oct  6 21:05:16.497: INFO: (15) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:443/proxy/: ... (200; 5.65543ms)
Oct  6 21:05:16.497: INFO: (15) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname2/proxy/: tls qux (200; 5.849871ms)
Oct  6 21:05:16.497: INFO: (15) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname2/proxy/: bar (200; 5.944558ms)
Oct  6 21:05:16.497: INFO: (15) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 6.02135ms)
Oct  6 21:05:16.498: INFO: (15) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:462/proxy/: tls qux (200; 6.370706ms)
Oct  6 21:05:16.498: INFO: (15) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname1/proxy/: tls baz (200; 6.620167ms)
Oct  6 21:05:16.501: INFO: (16) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:1080/proxy/: test<... (200; 2.732177ms)
Oct  6 21:05:16.503: INFO: (16) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname1/proxy/: foo (200; 4.586056ms)
Oct  6 21:05:16.503: INFO: (16) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:1080/proxy/: ... (200; 4.410499ms)
Oct  6 21:05:16.503: INFO: (16) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz/proxy/: test (200; 4.735169ms)
Oct  6 21:05:16.503: INFO: (16) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 4.591958ms)
Oct  6 21:05:16.504: INFO: (16) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname1/proxy/: tls baz (200; 5.362899ms)
Oct  6 21:05:16.504: INFO: (16) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:460/proxy/: tls baz (200; 5.556461ms)
Oct  6 21:05:16.504: INFO: (16) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 5.412765ms)
Oct  6 21:05:16.504: INFO: (16) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 5.669725ms)
Oct  6 21:05:16.504: INFO: (16) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 5.655782ms)
Oct  6 21:05:16.504: INFO: (16) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:443/proxy/: ... (200; 2.656461ms)
Oct  6 21:05:16.509: INFO: (17) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 3.677354ms)
Oct  6 21:05:16.509: INFO: (17) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 3.472608ms)
Oct  6 21:05:16.509: INFO: (17) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:460/proxy/: tls baz (200; 3.70855ms)
Oct  6 21:05:16.509: INFO: (17) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 4.065084ms)
Oct  6 21:05:16.509: INFO: (17) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:443/proxy/: test<... (200; 5.981555ms)
Oct  6 21:05:16.511: INFO: (17) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname1/proxy/: foo (200; 6.321766ms)
Oct  6 21:05:16.511: INFO: (17) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname1/proxy/: tls baz (200; 6.265699ms)
Oct  6 21:05:16.511: INFO: (17) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz/proxy/: test (200; 6.180513ms)
Oct  6 21:05:16.516: INFO: (18) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname2/proxy/: bar (200; 4.522366ms)
Oct  6 21:05:16.516: INFO: (18) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz/proxy/: test (200; 4.657937ms)
Oct  6 21:05:16.516: INFO: (18) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:460/proxy/: tls baz (200; 4.871145ms)
Oct  6 21:05:16.517: INFO: (18) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:443/proxy/: ... (200; 5.215533ms)
Oct  6 21:05:16.517: INFO: (18) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:1080/proxy/: test<... (200; 5.228531ms)
Oct  6 21:05:16.518: INFO: (18) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:462/proxy/: tls qux (200; 5.991627ms)
Oct  6 21:05:16.518: INFO: (18) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname1/proxy/: foo (200; 6.429638ms)
Oct  6 21:05:16.518: INFO: (18) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 6.394457ms)
Oct  6 21:05:16.518: INFO: (18) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname1/proxy/: foo (200; 6.444703ms)
Oct  6 21:05:16.518: INFO: (18) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 6.402689ms)
Oct  6 21:05:16.518: INFO: (18) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 6.437399ms)
Oct  6 21:05:16.518: INFO: (18) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname2/proxy/: bar (200; 6.564325ms)
Oct  6 21:05:16.518: INFO: (18) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname1/proxy/: tls baz (200; 6.60453ms)
Oct  6 21:05:16.522: INFO: (19) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:443/proxy/: ... (200; 5.963314ms)
Oct  6 21:05:16.525: INFO: (19) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:160/proxy/: foo (200; 6.209339ms)
Oct  6 21:05:16.525: INFO: (19) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname1/proxy/: tls baz (200; 6.503377ms)
Oct  6 21:05:16.525: INFO: (19) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:460/proxy/: tls baz (200; 6.365307ms)
Oct  6 21:05:16.525: INFO: (19) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname2/proxy/: bar (200; 6.302811ms)
Oct  6 21:05:16.525: INFO: (19) /api/v1/namespaces/proxy-1693/services/proxy-service-dzw6x:portname1/proxy/: foo (200; 6.698069ms)
Oct  6 21:05:16.525: INFO: (19) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz/proxy/: test (200; 6.436702ms)
Oct  6 21:05:16.525: INFO: (19) /api/v1/namespaces/proxy-1693/services/https:proxy-service-dzw6x:tlsportname2/proxy/: tls qux (200; 6.539867ms)
Oct  6 21:05:16.525: INFO: (19) /api/v1/namespaces/proxy-1693/pods/http:proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 6.791511ms)
Oct  6 21:05:16.525: INFO: (19) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:1080/proxy/: test<... (200; 6.836892ms)
Oct  6 21:05:16.525: INFO: (19) /api/v1/namespaces/proxy-1693/pods/proxy-service-dzw6x-dpvtz:162/proxy/: bar (200; 6.591707ms)
Oct  6 21:05:16.525: INFO: (19) /api/v1/namespaces/proxy-1693/pods/https:proxy-service-dzw6x-dpvtz:462/proxy/: tls qux (200; 6.639024ms)
Oct  6 21:05:16.526: INFO: (19) /api/v1/namespaces/proxy-1693/services/http:proxy-service-dzw6x:portname1/proxy/: foo (200; 6.648558ms)
STEP: deleting ReplicationController proxy-service-dzw6x in namespace proxy-1693, will wait for the garbage collector to delete the pods
Oct  6 21:05:16.588: INFO: Deleting ReplicationController proxy-service-dzw6x took: 8.831604ms
Oct  6 21:05:16.889: INFO: Terminating ReplicationController proxy-service-dzw6x pods took: 300.884266ms
[AfterEach] version v1
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:05:19.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1693" for this suite.

• [SLOW TEST:11.788 seconds]
[sig-network] Proxy
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod  [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":172,"skipped":2994,"failed":0}
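The requests logged in the test above all follow the apiserver proxy subresource URL scheme, where the target is written as `[scheme:]name[:port]`. As a rough illustration (the namespace, pod, and service names are taken from the log; the helper functions themselves are hypothetical), those paths can be assembled like this:

```python
def pod_proxy_path(namespace, pod, scheme=None, port=None):
    """Build an apiserver pod-proxy path like the ones in the log above.

    scheme ("http"/"https") and port are optional, mirroring the
    "[scheme:]name[:port]" form of the proxy subresource target.
    """
    target = pod
    if scheme:
        target = f"{scheme}:{target}"
    if port is not None:
        target = f"{target}:{port}"
    return f"/api/v1/namespaces/{namespace}/pods/{target}/proxy/"


def service_proxy_path(namespace, service, scheme=None, port_name=None):
    """Same idea for the service proxy subresource; the port part may be
    a named service port (e.g. "portname1", "tlsportname1")."""
    target = service
    if scheme:
        target = f"{scheme}:{target}"
    if port_name:
        target = f"{target}:{port_name}"
    return f"/api/v1/namespaces/{namespace}/services/{target}/proxy/"


# Reconstructing two of the URLs exercised above:
print(pod_proxy_path("proxy-1693", "proxy-service-dzw6x-dpvtz", "https", 443))
print(service_proxy_path("proxy-1693", "proxy-service-dzw6x", "http", "portname1"))
```

This only models the path layout the test iterates over; issuing the requests would additionally need an authenticated client against the apiserver.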
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:05:19.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Oct  6 21:05:28.074: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct  6 21:05:28.096: INFO: Pod pod-with-prestop-exec-hook still exists
Oct  6 21:05:30.097: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct  6 21:05:30.104: INFO: Pod pod-with-prestop-exec-hook still exists
Oct  6 21:05:32.096: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct  6 21:05:32.103: INFO: Pod pod-with-prestop-exec-hook still exists
Oct  6 21:05:34.097: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct  6 21:05:34.104: INFO: Pod pod-with-prestop-exec-hook still exists
Oct  6 21:05:36.097: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct  6 21:05:36.104: INFO: Pod pod-with-prestop-exec-hook still exists
Oct  6 21:05:38.097: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct  6 21:05:38.103: INFO: Pod pod-with-prestop-exec-hook still exists
Oct  6 21:05:40.097: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct  6 21:05:40.104: INFO: Pod pod-with-prestop-exec-hook still exists
Oct  6 21:05:42.097: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct  6 21:05:42.103: INFO: Pod pod-with-prestop-exec-hook still exists
Oct  6 21:05:44.097: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct  6 21:05:44.104: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:05:44.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8919" for this suite.

• [SLOW TEST:24.226 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":3008,"failed":0}
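The test above creates a pod whose container declares a preStop exec hook, then deletes it and polls until the pod disappears; the hook runs inside the container before termination proceeds. A minimal sketch of such a pod spec, written as the dict a Python Kubernetes client would serialize (the `lifecycle.preStop.exec` field names are the real API fields; the image and command are illustrative placeholders, not what the e2e test uses):

```python
# Hypothetical pod manifest with a preStop exec lifecycle hook.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-prestop-exec-hook"},
    "spec": {
        "containers": [{
            "name": "main",
            "image": "busybox",  # placeholder image
            "lifecycle": {
                "preStop": {
                    # Runs inside the container before the kubelet
                    # sends SIGTERM; termination waits for it.
                    "exec": {"command": ["sh", "-c", "echo prestop"]}
                }
            },
        }],
        # Bounds how long the preStop hook plus shutdown may take.
        "terminationGracePeriodSeconds": 30,
    },
}
print(pod["spec"]["containers"][0]["lifecycle"]["preStop"]["exec"]["command"])
```

The repeated "still exists" lines in the log are exactly this window: deletion is not complete until the hook has run and the grace period machinery finishes.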
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:05:44.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-3a84aad5-13ea-49d2-8a4e-36006a036c72
STEP: Creating a pod to test consume secrets
Oct  6 21:05:44.240: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d059a406-cb98-41f8-acf3-d3546403cba7" in namespace "projected-1534" to be "success or failure"
Oct  6 21:05:44.252: INFO: Pod "pod-projected-secrets-d059a406-cb98-41f8-acf3-d3546403cba7": Phase="Pending", Reason="", readiness=false. Elapsed: 11.764548ms
Oct  6 21:05:46.259: INFO: Pod "pod-projected-secrets-d059a406-cb98-41f8-acf3-d3546403cba7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018851493s
Oct  6 21:05:48.265: INFO: Pod "pod-projected-secrets-d059a406-cb98-41f8-acf3-d3546403cba7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025375748s
STEP: Saw pod success
Oct  6 21:05:48.266: INFO: Pod "pod-projected-secrets-d059a406-cb98-41f8-acf3-d3546403cba7" satisfied condition "success or failure"
Oct  6 21:05:48.271: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-d059a406-cb98-41f8-acf3-d3546403cba7 container secret-volume-test: 
STEP: delete the pod
Oct  6 21:05:48.298: INFO: Waiting for pod pod-projected-secrets-d059a406-cb98-41f8-acf3-d3546403cba7 to disappear
Oct  6 21:05:48.313: INFO: Pod pod-projected-secrets-d059a406-cb98-41f8-acf3-d3546403cba7 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:05:48.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1534" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":3011,"failed":0}
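The projected-secret test above mounts the created secret into the pod through a `projected` volume. A sketch of the shape of such a spec, assuming the secret name from the log (the volume name, mount path, and image are illustrative, not the test's actual values):

```python
# Hypothetical pod consuming a secret via a projected volume.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-projected-secrets"},
    "spec": {
        "containers": [{
            "name": "secret-volume-test",
            "image": "busybox",  # placeholder
            "volumeMounts": [{
                "name": "projected-secret-volume",
                "mountPath": "/etc/projected-secret-volume",
                "readOnly": True,
            }],
        }],
        "volumes": [{
            "name": "projected-secret-volume",
            "projected": {
                # A projected volume can merge secret, configMap,
                # downwardAPI and serviceAccountToken sources into
                # one mount; here a single secret source suffices.
                "sources": [{"secret": {
                    "name": "projected-secret-test-3a84aad5-13ea-49d2-8a4e-36006a036c72"
                }}],
            },
        }],
    },
}
print(pod["spec"]["volumes"][0]["projected"]["sources"][0]["secret"]["name"])
```

"Consumable in multiple volumes" in the test name means the same secret can back several such volume entries in one pod; the sketch shows only one.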
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:05:48.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3408.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3408.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3408.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3408.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3408.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3408.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3408.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3408.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3408.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3408.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3408.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 160.229.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.229.160_udp@PTR;check="$$(dig +tcp +noall +answer +search 160.229.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.229.160_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3408.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3408.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3408.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3408.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3408.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3408.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3408.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3408.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3408.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3408.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3408.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 160.229.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.229.160_udp@PTR;check="$$(dig +tcp +noall +answer +search 160.229.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.229.160_tcp@PTR;sleep 1; done

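Two of the record names probed by the shell loops above are derived mechanically from an IP address: the reversed `in-addr.arpa.` PTR owner name, and the dashed pod A-record name that the `awk` pipeline builds into `$podARec`. A small sketch of both derivations (the service IP `10.102.229.160` comes from the log; the pod IP used below is a made-up example, since the real one is only computed inside the probe pod):

```python
def ptr_name(ip):
    """in-addr.arpa PTR owner name for an IPv4 address, matching the
    "160.229.102.10.in-addr.arpa." query in the commands above."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa."


def pod_a_record(ip, namespace, cluster_domain="cluster.local"):
    """Dashed pod A-record name, matching the awk pipeline that
    builds $podARec in the probe commands above."""
    return f"{ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"


print(ptr_name("10.102.229.160"))            # 160.229.102.10.in-addr.arpa.
print(pod_a_record("10.244.1.7", "dns-3408"))  # example pod IP, not from the log
```

The probe pod writes an `OK` marker file per name that resolves; the test then reads those results back, which is what the "looking for the results for each expected name" step below does.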
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct  6 21:05:56.664: INFO: Unable to read wheezy_udp@dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:05:56.668: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:05:56.672: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:05:56.675: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:05:56.703: INFO: Unable to read jessie_udp@dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:05:56.707: INFO: Unable to read jessie_tcp@dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:05:56.711: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:05:56.715: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:05:56.739: INFO: Lookups using dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778 failed for: [wheezy_udp@dns-test-service.dns-3408.svc.cluster.local wheezy_tcp@dns-test-service.dns-3408.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local jessie_udp@dns-test-service.dns-3408.svc.cluster.local jessie_tcp@dns-test-service.dns-3408.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local]

Oct  6 21:06:01.748: INFO: Unable to read wheezy_udp@dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:01.753: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:01.756: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:01.760: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:01.788: INFO: Unable to read jessie_udp@dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:01.792: INFO: Unable to read jessie_tcp@dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:01.796: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:01.799: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:01.822: INFO: Lookups using dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778 failed for: [wheezy_udp@dns-test-service.dns-3408.svc.cluster.local wheezy_tcp@dns-test-service.dns-3408.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local jessie_udp@dns-test-service.dns-3408.svc.cluster.local jessie_tcp@dns-test-service.dns-3408.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local]

Oct  6 21:06:06.747: INFO: Unable to read wheezy_udp@dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:06.752: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:06.756: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:06.760: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:06.786: INFO: Unable to read jessie_udp@dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:06.790: INFO: Unable to read jessie_tcp@dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:06.794: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:06.798: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:06.821: INFO: Lookups using dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778 failed for: [wheezy_udp@dns-test-service.dns-3408.svc.cluster.local wheezy_tcp@dns-test-service.dns-3408.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local jessie_udp@dns-test-service.dns-3408.svc.cluster.local jessie_tcp@dns-test-service.dns-3408.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local]

Oct  6 21:06:11.747: INFO: Unable to read wheezy_udp@dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:11.751: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:11.756: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:11.760: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:11.795: INFO: Unable to read jessie_udp@dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:11.799: INFO: Unable to read jessie_tcp@dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:11.801: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:11.804: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:11.822: INFO: Lookups using dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778 failed for: [wheezy_udp@dns-test-service.dns-3408.svc.cluster.local wheezy_tcp@dns-test-service.dns-3408.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local jessie_udp@dns-test-service.dns-3408.svc.cluster.local jessie_tcp@dns-test-service.dns-3408.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local]

Oct  6 21:06:16.747: INFO: Unable to read wheezy_udp@dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:16.752: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:16.756: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:16.759: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:16.786: INFO: Unable to read jessie_udp@dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:16.790: INFO: Unable to read jessie_tcp@dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:16.794: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:16.798: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:16.823: INFO: Lookups using dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778 failed for: [wheezy_udp@dns-test-service.dns-3408.svc.cluster.local wheezy_tcp@dns-test-service.dns-3408.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local jessie_udp@dns-test-service.dns-3408.svc.cluster.local jessie_tcp@dns-test-service.dns-3408.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local]

Oct  6 21:06:21.744: INFO: Unable to read wheezy_udp@dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:21.748: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:21.753: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:21.758: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:21.789: INFO: Unable to read jessie_udp@dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:21.793: INFO: Unable to read jessie_tcp@dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:21.797: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:21.801: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local from pod dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778: the server could not find the requested resource (get pods dns-test-9538e1bd-3861-4281-afe8-099fbb140778)
Oct  6 21:06:21.828: INFO: Lookups using dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778 failed for: [wheezy_udp@dns-test-service.dns-3408.svc.cluster.local wheezy_tcp@dns-test-service.dns-3408.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local jessie_udp@dns-test-service.dns-3408.svc.cluster.local jessie_tcp@dns-test-service.dns-3408.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3408.svc.cluster.local]

Oct  6 21:06:26.820: INFO: DNS probes using dns-3408/dns-test-9538e1bd-3861-4281-afe8-099fbb140778 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:06:27.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3408" for this suite.

• [SLOW TEST:39.449 seconds]
[sig-network] DNS
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":278,"completed":175,"skipped":3025,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:06:27.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Oct  6 21:06:27.888: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c2d741f-5ffa-49cc-a4e7-947a167fc4bd" in namespace "projected-8234" to be "success or failure"
Oct  6 21:06:27.904: INFO: Pod "downwardapi-volume-7c2d741f-5ffa-49cc-a4e7-947a167fc4bd": Phase="Pending", Reason="", readiness=false. Elapsed: 15.005832ms
Oct  6 21:06:29.909: INFO: Pod "downwardapi-volume-7c2d741f-5ffa-49cc-a4e7-947a167fc4bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020256019s
Oct  6 21:06:31.915: INFO: Pod "downwardapi-volume-7c2d741f-5ffa-49cc-a4e7-947a167fc4bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026815755s
STEP: Saw pod success
Oct  6 21:06:31.916: INFO: Pod "downwardapi-volume-7c2d741f-5ffa-49cc-a4e7-947a167fc4bd" satisfied condition "success or failure"
Oct  6 21:06:31.919: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-7c2d741f-5ffa-49cc-a4e7-947a167fc4bd container client-container: 
STEP: delete the pod
Oct  6 21:06:31.992: INFO: Waiting for pod downwardapi-volume-7c2d741f-5ffa-49cc-a4e7-947a167fc4bd to disappear
Oct  6 21:06:32.004: INFO: Pod downwardapi-volume-7c2d741f-5ffa-49cc-a4e7-947a167fc4bd no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:06:32.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8234" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":3039,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:06:32.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating server pod server in namespace prestop-1842
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-1842
STEP: Deleting pre-stop pod
Oct  6 21:06:45.331: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:06:45.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-1842" for this suite.

• [SLOW TEST:13.340 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":278,"completed":177,"skipped":3070,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:06:45.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-ca465db8-4408-4ebd-aa91-473ad874d734
STEP: Creating a pod to test consume configMaps
Oct  6 21:06:45.814: INFO: Waiting up to 5m0s for pod "pod-configmaps-63f286e6-f88c-47c5-90d2-1bac3f3243d2" in namespace "configmap-7040" to be "success or failure"
Oct  6 21:06:45.830: INFO: Pod "pod-configmaps-63f286e6-f88c-47c5-90d2-1bac3f3243d2": Phase="Pending", Reason="", readiness=false. Elapsed: 15.947968ms
Oct  6 21:06:47.837: INFO: Pod "pod-configmaps-63f286e6-f88c-47c5-90d2-1bac3f3243d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022790125s
Oct  6 21:06:49.848: INFO: Pod "pod-configmaps-63f286e6-f88c-47c5-90d2-1bac3f3243d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03321334s
STEP: Saw pod success
Oct  6 21:06:49.848: INFO: Pod "pod-configmaps-63f286e6-f88c-47c5-90d2-1bac3f3243d2" satisfied condition "success or failure"
Oct  6 21:06:49.852: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-63f286e6-f88c-47c5-90d2-1bac3f3243d2 container configmap-volume-test: 
STEP: delete the pod
Oct  6 21:06:50.066: INFO: Waiting for pod pod-configmaps-63f286e6-f88c-47c5-90d2-1bac3f3243d2 to disappear
Oct  6 21:06:50.089: INFO: Pod pod-configmaps-63f286e6-f88c-47c5-90d2-1bac3f3243d2 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:06:50.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7040" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":3092,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:06:50.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-bbfa0778-5a61-4aeb-9a6b-96f0f6b14b48
STEP: Creating a pod to test consume configMaps
Oct  6 21:06:50.324: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bee65c33-cdf3-49f8-96ac-41891166faf3" in namespace "projected-996" to be "success or failure"
Oct  6 21:06:50.343: INFO: Pod "pod-projected-configmaps-bee65c33-cdf3-49f8-96ac-41891166faf3": Phase="Pending", Reason="", readiness=false. Elapsed: 19.116554ms
Oct  6 21:06:52.369: INFO: Pod "pod-projected-configmaps-bee65c33-cdf3-49f8-96ac-41891166faf3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04474128s
Oct  6 21:06:54.376: INFO: Pod "pod-projected-configmaps-bee65c33-cdf3-49f8-96ac-41891166faf3": Phase="Running", Reason="", readiness=true. Elapsed: 4.051819489s
Oct  6 21:06:56.384: INFO: Pod "pod-projected-configmaps-bee65c33-cdf3-49f8-96ac-41891166faf3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.060033357s
STEP: Saw pod success
Oct  6 21:06:56.385: INFO: Pod "pod-projected-configmaps-bee65c33-cdf3-49f8-96ac-41891166faf3" satisfied condition "success or failure"
Oct  6 21:06:56.396: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-bee65c33-cdf3-49f8-96ac-41891166faf3 container projected-configmap-volume-test: 
STEP: delete the pod
Oct  6 21:06:56.430: INFO: Waiting for pod pod-projected-configmaps-bee65c33-cdf3-49f8-96ac-41891166faf3 to disappear
Oct  6 21:06:56.442: INFO: Pod pod-projected-configmaps-bee65c33-cdf3-49f8-96ac-41891166faf3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:06:56.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-996" for this suite.

• [SLOW TEST:6.351 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":3102,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:06:56.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-fb693976-fa40-4ec9-b593-bd00d5103f9d
STEP: Creating secret with name s-test-opt-upd-c977c852-af6b-4041-81aa-bffb2ceb2c4e
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-fb693976-fa40-4ec9-b593-bd00d5103f9d
STEP: Updating secret s-test-opt-upd-c977c852-af6b-4041-81aa-bffb2ceb2c4e
STEP: Creating secret with name s-test-opt-create-f4a187a9-bfb8-48d8-a0be-7a405ead9647
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:07:04.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7565" for this suite.

• [SLOW TEST:8.233 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":3129,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:07:04.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Oct  6 21:07:08.819: INFO: &Pod{ObjectMeta:{send-events-8cfbf590-e0e4-4ee1-955e-89fa02979b12  events-6943 /api/v1/namespaces/events-6943/pods/send-events-8cfbf590-e0e4-4ee1-955e-89fa02979b12 fe225f18-12c8-4a7c-9a61-3e4e1d323a39 3613770 0 2020-10-06 21:07:04 +0000 UTC   map[name:foo time:781435037] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qccjx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qccjx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qccjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 21:07:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 21:07:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 21:07:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 21:07:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.11,StartTime:2020-10-06 21:07:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-06 21:07:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://0725565bbb75f87e8ee9df1e2c05bb3b2d004d07e2a837f25d4fec07425a9682,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.11,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Oct  6 21:07:10.837: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Oct  6 21:07:12.845: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:07:12.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6943" for this suite.

• [SLOW TEST:8.261 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":181,"skipped":3159,"failed":0}
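Per-spec summary lines like the one above are plain JSON, so run progress can be tallied offline. A minimal sketch (not part of the e2e framework; the function name is invented for illustration, field names are taken from the log lines themselves):

```python
import json

def tally(lines):
    """Aggregate Ginkgo per-spec JSON summary lines into pass/fail counts.

    Returns (passed, failed, remaining) where remaining is computed from the
    "total" and "completed" fields of the last summary line seen.
    """
    passed = failed = 0
    last = None
    for line in lines:
        line = line.strip()
        if not line.startswith('{"msg"'):
            continue  # ordinary log output, not a JSON summary line
        rec = json.loads(line)
        if rec["msg"].startswith("PASSED"):
            passed += 1
        elif rec["msg"].startswith("FAILED"):
            failed += 1
        last = rec
    remaining = last["total"] - last["completed"] if last else None
    return passed, failed, remaining
```

Feeding the whole log through such a tally reproduces the running `completed`/`failed` counters the suite prints.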
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:07:12.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Oct  6 21:07:17.765: INFO: Successfully updated pod "labelsupdatec7f0037d-30c0-4cef-b816-97d8c01365f3"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:07:19.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8313" for this suite.

• [SLOW TEST:6.876 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":3166,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:07:19.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:07:23.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2467" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":3173,"failed":0}
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:07:23.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 21:07:24.056: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Oct  6 21:07:24.071: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:24.110: INFO: Number of nodes with available pods: 0
Oct  6 21:07:24.110: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 21:07:25.122: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:25.128: INFO: Number of nodes with available pods: 0
Oct  6 21:07:25.128: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 21:07:26.358: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:26.741: INFO: Number of nodes with available pods: 0
Oct  6 21:07:26.742: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 21:07:27.125: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:27.305: INFO: Number of nodes with available pods: 0
Oct  6 21:07:27.305: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 21:07:28.122: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:28.127: INFO: Number of nodes with available pods: 0
Oct  6 21:07:28.127: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 21:07:29.121: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:29.126: INFO: Number of nodes with available pods: 2
Oct  6 21:07:29.126: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Oct  6 21:07:29.197: INFO: Wrong image for pod: daemon-set-f8xsb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:29.197: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:29.247: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:30.255: INFO: Wrong image for pod: daemon-set-f8xsb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:30.255: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:30.267: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:31.255: INFO: Wrong image for pod: daemon-set-f8xsb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:31.255: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:31.264: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:32.256: INFO: Wrong image for pod: daemon-set-f8xsb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:32.256: INFO: Pod daemon-set-f8xsb is not available
Oct  6 21:07:32.256: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:32.265: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:33.294: INFO: Wrong image for pod: daemon-set-f8xsb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:33.294: INFO: Pod daemon-set-f8xsb is not available
Oct  6 21:07:33.294: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:33.305: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:34.256: INFO: Wrong image for pod: daemon-set-f8xsb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:34.257: INFO: Pod daemon-set-f8xsb is not available
Oct  6 21:07:34.257: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:34.265: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:35.257: INFO: Wrong image for pod: daemon-set-f8xsb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:35.257: INFO: Pod daemon-set-f8xsb is not available
Oct  6 21:07:35.257: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:35.265: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:36.255: INFO: Wrong image for pod: daemon-set-f8xsb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:36.255: INFO: Pod daemon-set-f8xsb is not available
Oct  6 21:07:36.255: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:36.261: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:37.255: INFO: Wrong image for pod: daemon-set-f8xsb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:37.255: INFO: Pod daemon-set-f8xsb is not available
Oct  6 21:07:37.256: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:37.263: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:38.255: INFO: Wrong image for pod: daemon-set-f8xsb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:38.255: INFO: Pod daemon-set-f8xsb is not available
Oct  6 21:07:38.255: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:38.262: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:39.274: INFO: Wrong image for pod: daemon-set-f8xsb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:39.274: INFO: Pod daemon-set-f8xsb is not available
Oct  6 21:07:39.274: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:39.284: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:40.257: INFO: Wrong image for pod: daemon-set-f8xsb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:40.257: INFO: Pod daemon-set-f8xsb is not available
Oct  6 21:07:40.257: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:40.266: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:41.256: INFO: Wrong image for pod: daemon-set-f8xsb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:41.256: INFO: Pod daemon-set-f8xsb is not available
Oct  6 21:07:41.256: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:41.265: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:42.291: INFO: Wrong image for pod: daemon-set-f8xsb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:42.292: INFO: Pod daemon-set-f8xsb is not available
Oct  6 21:07:42.292: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:42.298: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:43.280: INFO: Wrong image for pod: daemon-set-f8xsb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:43.281: INFO: Pod daemon-set-f8xsb is not available
Oct  6 21:07:43.281: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:43.304: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:44.254: INFO: Pod daemon-set-cmk4l is not available
Oct  6 21:07:44.255: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:44.266: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:45.255: INFO: Pod daemon-set-cmk4l is not available
Oct  6 21:07:45.256: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:45.262: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:46.255: INFO: Pod daemon-set-cmk4l is not available
Oct  6 21:07:46.255: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:46.265: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:47.254: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:47.263: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:48.255: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:48.263: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:49.256: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:49.256: INFO: Pod daemon-set-wm9tz is not available
Oct  6 21:07:49.266: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:50.254: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:50.254: INFO: Pod daemon-set-wm9tz is not available
Oct  6 21:07:50.260: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:51.256: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:51.256: INFO: Pod daemon-set-wm9tz is not available
Oct  6 21:07:51.266: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:52.256: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:52.256: INFO: Pod daemon-set-wm9tz is not available
Oct  6 21:07:52.263: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:53.256: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:53.257: INFO: Pod daemon-set-wm9tz is not available
Oct  6 21:07:53.267: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:54.255: INFO: Wrong image for pod: daemon-set-wm9tz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Oct  6 21:07:54.255: INFO: Pod daemon-set-wm9tz is not available
Oct  6 21:07:54.264: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:55.257: INFO: Pod daemon-set-sqpqj is not available
Oct  6 21:07:55.265: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Oct  6 21:07:55.272: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:55.276: INFO: Number of nodes with available pods: 1
Oct  6 21:07:55.276: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 21:07:56.287: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:56.292: INFO: Number of nodes with available pods: 1
Oct  6 21:07:56.292: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 21:07:57.289: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:57.296: INFO: Number of nodes with available pods: 1
Oct  6 21:07:57.296: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 21:07:58.288: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:07:58.294: INFO: Number of nodes with available pods: 2
Oct  6 21:07:58.295: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7604, will wait for the garbage collector to delete the pods
Oct  6 21:07:58.383: INFO: Deleting DaemonSet.extensions daemon-set took: 7.825907ms
Oct  6 21:07:58.684: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.713427ms
Oct  6 21:08:14.390: INFO: Number of nodes with available pods: 0
Oct  6 21:08:14.390: INFO: Number of running nodes: 0, number of available pods: 0
Oct  6 21:08:14.395: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7604/daemonsets","resourceVersion":"3614141"},"items":null}

Oct  6 21:08:14.400: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7604/pods","resourceVersion":"3614141"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:08:14.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7604" for this suite.

• [SLOW TEST:50.484 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
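The rolling-update check above polls once per second, printing "Number of nodes with available pods: N" until the count reaches the node count. When reading a long log after the fact, the convergence point can be found mechanically; a small sketch, with the regex based on the log's own wording (the helper name is invented for illustration):

```python
import re

# Matches polling lines such as:
#   "Oct  6 21:07:58.294: INFO: Number of nodes with available pods: 2"
AVAIL = re.compile(r"Number of nodes with available pods: (\d+)")

def rollout_complete_at(lines, desired):
    """Return the index of the first line where the available-pod count
    reaches the desired node count, or None if it never converges."""
    for i, line in enumerate(lines):
        m = AVAIL.search(line)
        if m and int(m.group(1)) >= desired:
            return i
    return None
```

Applied to the DaemonSet section above with `desired=2` (the two schedulable workers; the tainted control-plane node is skipped), this lands on the final `21:07:58` poll.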
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":184,"skipped":3175,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:08:14.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:08:14.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2788" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":185,"skipped":3182,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:08:14.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct  6 21:08:18.518: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct  6 21:08:20.575: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615298, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615298, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615298, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615298, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 21:08:22.610: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615298, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615298, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615298, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615298, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
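The two `deployment status:` lines above are dumped Go structs (`v1.DeploymentStatus`), which are awkward to eyeball. The integer counters can be pulled out with a regex; a minimal sketch, separate from the test framework (function name invented for illustration):

```python
import re

# Longer field names are listed before "Replicas" so the alternation
# never matches a bare "Replicas" inside e.g. "ReadyReplicas".
FIELDS = re.compile(
    r"(ObservedGeneration|UpdatedReplicas|ReadyReplicas|"
    r"AvailableReplicas|UnavailableReplicas|Replicas):(\d+)"
)

def deployment_counters(line):
    """Extract the integer counters from a dumped v1.DeploymentStatus line."""
    return {name: int(value) for name, value in FIELDS.findall(line)}
```

On the dumps above this yields `ReadyReplicas: 0` and `UnavailableReplicas: 1`, i.e. the webhook pod was still coming up when those polls ran.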
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct  6 21:08:25.617: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
Oct  6 21:08:25.689: INFO: Waiting for webhook configuration to be ready...
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:08:25.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2460" for this suite.
STEP: Destroying namespace "webhook-2460-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.471 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":186,"skipped":3197,"failed":0}
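The webhook test above registers a mutating pod webhook and then creates a pod that the webhook is expected to modify. Mechanically, a mutating admission webhook replies to the apiserver with an AdmissionReview whose response carries a base64-encoded JSONPatch. A minimal sketch of building such a response; the patched label and the uid are hypothetical examples, not the patch the e2e suite's sample webhook actually applies:

```python
import base64
import json

def build_admission_response(uid, patch_ops):
    # Encode the JSONPatch operations the way the apiserver expects:
    # a base64 string in response.patch, with patchType "JSONPatch".
    patch = json.dumps(patch_ops).encode()
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(patch).decode(),
        },
    }

# Hypothetical mutation: add a label to the incoming pod.
resp = build_admission_response(
    "00000000-0000-0000-0000-000000000000",
    [{"op": "add", "path": "/metadata/labels/mutated", "value": "true"}],
)
decoded = json.loads(base64.b64decode(resp["response"]["patch"]))
print(decoded[0]["path"])  # /metadata/labels/mutated
```

The apiserver decodes `response.patch` and applies it to the object before admission completes, which is why the test can observe defaults applied after mutation.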
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:08:26.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 21:08:26.093: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:08:27.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1923" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":187,"skipped":3197,"failed":0}
SSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:08:27.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-3614
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-3614
STEP: creating replication controller externalsvc in namespace services-3614
I1006 21:08:27.666377       7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-3614, replica count: 2
I1006 21:08:30.717751       7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1006 21:08:33.718579       7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Oct  6 21:08:33.776: INFO: Creating new exec pod
Oct  6 21:08:37.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3614 execpodkk86q -- /bin/sh -x -c nslookup nodeport-service'
Oct  6 21:08:39.317: INFO: stderr: "I1006 21:08:39.132723    3116 log.go:172] (0x4000af6000) (0x40009f8000) Create stream\nI1006 21:08:39.135691    3116 log.go:172] (0x4000af6000) (0x40009f8000) Stream added, broadcasting: 1\nI1006 21:08:39.149115    3116 log.go:172] (0x4000af6000) Reply frame received for 1\nI1006 21:08:39.149697    3116 log.go:172] (0x4000af6000) (0x40009f80a0) Create stream\nI1006 21:08:39.149753    3116 log.go:172] (0x4000af6000) (0x40009f80a0) Stream added, broadcasting: 3\nI1006 21:08:39.151748    3116 log.go:172] (0x4000af6000) Reply frame received for 3\nI1006 21:08:39.152374    3116 log.go:172] (0x4000af6000) (0x4000537540) Create stream\nI1006 21:08:39.152549    3116 log.go:172] (0x4000af6000) (0x4000537540) Stream added, broadcasting: 5\nI1006 21:08:39.154483    3116 log.go:172] (0x4000af6000) Reply frame received for 5\nI1006 21:08:39.221307    3116 log.go:172] (0x4000af6000) Data frame received for 5\nI1006 21:08:39.221682    3116 log.go:172] (0x4000537540) (5) Data frame handling\nI1006 21:08:39.222540    3116 log.go:172] (0x4000537540) (5) Data frame sent\n+ nslookup nodeport-service\nI1006 21:08:39.292669    3116 log.go:172] (0x4000af6000) Data frame received for 3\nI1006 21:08:39.292828    3116 log.go:172] (0x40009f80a0) (3) Data frame handling\nI1006 21:08:39.293115    3116 log.go:172] (0x40009f80a0) (3) Data frame sent\nI1006 21:08:39.294094    3116 log.go:172] (0x4000af6000) Data frame received for 3\nI1006 21:08:39.294205    3116 log.go:172] (0x40009f80a0) (3) Data frame handling\nI1006 21:08:39.294314    3116 log.go:172] (0x40009f80a0) (3) Data frame sent\nI1006 21:08:39.294803    3116 log.go:172] (0x4000af6000) Data frame received for 3\nI1006 21:08:39.294896    3116 log.go:172] (0x40009f80a0) (3) Data frame handling\nI1006 21:08:39.295359    3116 log.go:172] (0x4000af6000) Data frame received for 5\nI1006 21:08:39.295531    3116 log.go:172] (0x4000537540) (5) Data frame handling\nI1006 21:08:39.297283    3116 log.go:172] 
(0x4000af6000) Data frame received for 1\nI1006 21:08:39.297354    3116 log.go:172] (0x40009f8000) (1) Data frame handling\nI1006 21:08:39.297441    3116 log.go:172] (0x40009f8000) (1) Data frame sent\nI1006 21:08:39.298768    3116 log.go:172] (0x4000af6000) (0x40009f8000) Stream removed, broadcasting: 1\nI1006 21:08:39.303812    3116 log.go:172] (0x4000af6000) Go away received\nI1006 21:08:39.307078    3116 log.go:172] (0x4000af6000) (0x40009f8000) Stream removed, broadcasting: 1\nI1006 21:08:39.307807    3116 log.go:172] (0x4000af6000) (0x40009f80a0) Stream removed, broadcasting: 3\nI1006 21:08:39.308395    3116 log.go:172] (0x4000af6000) (0x4000537540) Stream removed, broadcasting: 5\n"
Oct  6 21:08:39.318: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-3614.svc.cluster.local\tcanonical name = externalsvc.services-3614.svc.cluster.local.\nName:\texternalsvc.services-3614.svc.cluster.local\nAddress: 10.104.224.65\n\n"
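The stdout above is the evidence the test relies on: after the type change, `nodeport-service` resolves via a CNAME to `externalsvc`. A small sketch of parsing that nslookup output to extract the canonical name, using the exact stdout from the log:

```python
import re

# Verbatim stdout captured in the log line above.
NSLOOKUP_OUT = (
    "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\n"
    "nodeport-service.services-3614.svc.cluster.local\tcanonical name = "
    "externalsvc.services-3614.svc.cluster.local.\n"
    "Name:\texternalsvc.services-3614.svc.cluster.local\nAddress: 10.104.224.65\n\n"
)

def canonical_name(out):
    # Pull the CNAME target, dropping the trailing root dot.
    m = re.search(r"canonical name = (\S+?)\.?\n", out)
    return m.group(1) if m else None

print(canonical_name(NSLOOKUP_OUT))
# externalsvc.services-3614.svc.cluster.local
```

This is the same check the test performs in spirit: an ExternalName service publishes no cluster IP of its own, only a CNAME record pointing at the configured external name.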
STEP: deleting ReplicationController externalsvc in namespace services-3614, will wait for the garbage collector to delete the pods
Oct  6 21:08:39.400: INFO: Deleting ReplicationController externalsvc took: 25.756967ms
Oct  6 21:08:39.501: INFO: Terminating ReplicationController externalsvc pods took: 100.879532ms
Oct  6 21:08:54.439: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:08:54.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3614" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:27.097 seconds]
[sig-network] Services
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":188,"skipped":3203,"failed":0}
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:08:54.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:182
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:08:54.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3405" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":189,"skipped":3206,"failed":0}
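The QOS-class test above verifies that a pod whose containers set matching cpu and memory requests and limits is classified as Guaranteed. A simplified sketch of the classification rule (it omits init containers and the requests-default-to-limits subtlety of the real kubelet logic):

```python
def qos_class(containers):
    # containers: list of {"requests": {...}, "limits": {...}} dicts.
    reqs = [c.get("requests") or {} for c in containers]
    lims = [c.get("limits") or {} for c in containers]
    # BestEffort: no container sets any request or limit.
    if all(not r and not l for r, l in zip(reqs, lims)):
        return "BestEffort"
    # Guaranteed: every container has cpu+memory limits and requests equal limits.
    if all(
        l.get("cpu") and l.get("memory") and r == l
        for r, l in zip(reqs, lims)
    ):
        return "Guaranteed"
    # Everything in between is Burstable.
    return "Burstable"

matching = {"requests": {"cpu": "100m", "memory": "100Mi"},
            "limits": {"cpu": "100m", "memory": "100Mi"}}
print(qos_class([matching]))  # Guaranteed
```

A pod with only a request, or with requests below limits, would come back Burstable instead.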
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:08:54.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:09:11.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2653" for this suite.

• [SLOW TEST:16.801 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":190,"skipped":3213,"failed":0}
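The ResourceQuota test above exercises scope matching: a quota with the BestEffort scope should count only best-effort pods, and one with the NotBestEffort scope only the rest. A simplified sketch of that selection (the real quota controller evaluates the pod's computed QOS class; here best-effort is approximated as "no container sets any request or limit"):

```python
def is_best_effort(containers):
    # Best-effort: no container declares any resource request or limit.
    return all(
        not c.get("requests") and not c.get("limits")
        for c in containers
    )

def pods_counted(quota_scope, pods):
    # quota_scope: "BestEffort" or "NotBestEffort".
    if quota_scope == "NotBestEffort":
        return [p for p in pods if not is_best_effort(p["containers"])]
    return [p for p in pods if is_best_effort(p["containers"])]

# Hypothetical pods mirroring the two phases of the test.
pods = [
    {"name": "best-effort-pod", "containers": [{}]},
    {"name": "burstable-pod",
     "containers": [{"requests": {"cpu": "100m"}, "limits": {"cpu": "100m"}}]},
]
print([p["name"] for p in pods_counted("BestEffort", pods)])
# ['best-effort-pod']
```

Each scoped quota therefore "captures" only the pod in its scope and "ignores" the other, which is exactly the pair of assertions the STEP lines walk through.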
S
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:09:11.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4801.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-4801.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4801.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-4801.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4801.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4801.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-4801.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4801.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-4801.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4801.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

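The probe scripts above derive the pod's A-record name with an awk one-liner: take `hostname -i` and replace the dots with dashes, then append `<namespace>.pod.<cluster-domain>`. The same transform in Python; the pod IP is a hypothetical example, the namespace `dns-4801` is taken from the log:

```python
def pod_a_record(pod_ip, namespace, cluster_domain="cluster.local"):
    # 10.244.1.5 in namespace dns-4801 -> 10-244-1-5.dns-4801.pod.cluster.local
    return "%s.%s.pod.%s" % (pod_ip.replace(".", "-"), namespace, cluster_domain)

print(pod_a_record("10.244.1.5", "dns-4801"))
# 10-244-1-5.dns-4801.pod.cluster.local
```

The wheezy and jessie probes each resolve this name over both UDP (`+notcp`) and TCP (`+tcp`) and write an `OK` marker file per successful lookup, which is what the "looking for the results from probers" step reads back.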
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct  6 21:09:17.631: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:17.635: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:17.640: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:17.643: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:17.656: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:17.660: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:17.665: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:17.669: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:17.683: INFO: Lookups using dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4801.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4801.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local jessie_udp@dns-test-service-2.dns-4801.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4801.svc.cluster.local]

Oct  6 21:09:22.691: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:22.696: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:22.700: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:22.703: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:22.713: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:22.717: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:22.720: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:22.723: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:22.735: INFO: Lookups using dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4801.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4801.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local jessie_udp@dns-test-service-2.dns-4801.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4801.svc.cluster.local]

Oct  6 21:09:27.689: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:27.694: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:27.698: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:27.702: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:27.716: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:27.720: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:27.724: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:27.727: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:27.734: INFO: Lookups using dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4801.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4801.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local jessie_udp@dns-test-service-2.dns-4801.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4801.svc.cluster.local]

Oct  6 21:09:32.691: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:32.697: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:32.701: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:32.706: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:32.718: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:32.722: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:32.726: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:32.730: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:32.737: INFO: Lookups using dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4801.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4801.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local jessie_udp@dns-test-service-2.dns-4801.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4801.svc.cluster.local]

Oct  6 21:09:37.691: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:37.697: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:37.702: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:37.706: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:37.718: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:37.722: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:37.726: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:37.753: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:37.760: INFO: Lookups using dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4801.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4801.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local jessie_udp@dns-test-service-2.dns-4801.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4801.svc.cluster.local]

Oct  6 21:09:42.691: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:42.696: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:42.700: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:42.703: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:42.715: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:42.719: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:42.723: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:42.727: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4801.svc.cluster.local from pod dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433: the server could not find the requested resource (get pods dns-test-022293e8-db03-4d8e-a53e-21069ae6d433)
Oct  6 21:09:42.735: INFO: Lookups using dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4801.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4801.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4801.svc.cluster.local jessie_udp@dns-test-service-2.dns-4801.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4801.svc.cluster.local]

Oct  6 21:09:47.739: INFO: DNS probes using dns-4801/dns-test-022293e8-db03-4d8e-a53e-21069ae6d433 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:09:48.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4801" for this suite.

• [SLOW TEST:37.052 seconds]
[sig-network] DNS
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":191,"skipped":3214,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:09:48.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 21:09:48.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Oct  6 21:10:07.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9887 create -f -'
Oct  6 21:10:12.055: INFO: stderr: ""
Oct  6 21:10:12.055: INFO: stdout: "e2e-test-crd-publish-openapi-7603-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Oct  6 21:10:12.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9887 delete e2e-test-crd-publish-openapi-7603-crds test-foo'
Oct  6 21:10:13.312: INFO: stderr: ""
Oct  6 21:10:13.312: INFO: stdout: "e2e-test-crd-publish-openapi-7603-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Oct  6 21:10:13.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9887 apply -f -'
Oct  6 21:10:14.895: INFO: stderr: ""
Oct  6 21:10:14.895: INFO: stdout: "e2e-test-crd-publish-openapi-7603-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Oct  6 21:10:14.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9887 delete e2e-test-crd-publish-openapi-7603-crds test-foo'
Oct  6 21:10:16.152: INFO: stderr: ""
Oct  6 21:10:16.152: INFO: stdout: "e2e-test-crd-publish-openapi-7603-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Oct  6 21:10:16.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9887 create -f -'
Oct  6 21:10:17.681: INFO: rc: 1
Oct  6 21:10:17.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9887 apply -f -'
Oct  6 21:10:19.193: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Oct  6 21:10:19.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9887 create -f -'
Oct  6 21:10:20.740: INFO: rc: 1
Oct  6 21:10:20.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9887 apply -f -'
Oct  6 21:10:22.247: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Oct  6 21:10:22.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7603-crds'
Oct  6 21:10:23.799: INFO: stderr: ""
Oct  6 21:10:23.799: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7603-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Oct  6 21:10:23.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7603-crds.metadata'
Oct  6 21:10:25.386: INFO: stderr: ""
Oct  6 21:10:25.387: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7603-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Oct  6 21:10:25.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7603-crds.spec'
Oct  6 21:10:26.962: INFO: stderr: ""
Oct  6 21:10:26.962: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7603-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Oct  6 21:10:26.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7603-crds.spec.bars'
Oct  6 21:10:28.556: INFO: stderr: ""
Oct  6 21:10:28.557: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7603-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Oct  6 21:10:28.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7603-crds.spec.bars2'
Oct  6 21:10:30.088: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:10:39.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9887" for this suite.

• [SLOW TEST:51.329 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":192,"skipped":3255,"failed":0}
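[Editor's note: the `kubectl explain` output above implies a structural validation schema for the published CRD. The test generates its manifest in Go, so the exact YAML is not in this log; the sketch below is a reconstruction using only the group, kind, and field names visible in the explain output (`spec.bars[]` with `age`, `bazs`, and a required `name`). The scalar types are assumptions, since the `<...>` type annotations were stripped from the captured stdout.]

```yaml
# Hypothetical reconstruction of the schema implied by the explain output;
# not the literal manifest the e2e test creates.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crd-publish-openapi-7603-crds.crd-publish-openapi-test-foo.example.com
spec:
  group: crd-publish-openapi-test-foo.example.com
  scope: Namespaced
  names:
    plural: e2e-test-crd-publish-openapi-7603-crds
    kind: E2e-test-crd-publish-openapi-7603-crd
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          description: Foo CRD for Testing
          properties:
            spec:
              description: Specification of Foo
              type: object
              properties:
                bars:
                  description: List of Bars and their specs.
                  type: array
                  items:
                    type: object
                    required: ["name"]   # explain shows name as -required-
                    properties:
                      name:
                        description: Name of Bar.
                        type: string
                      age:
                        description: Age of Bar.
                        type: string     # type assumed; stripped from log
                      bazs:
                        description: List of Bazs.
                        type: array
                        items:
                          type: string
            status:
              description: Status of Foo
              type: object
```

A schema of this shape is what makes the `create`/`apply` attempts above fail with `rc: 1` when a manifest omits `name` or adds a property the schema disallows.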
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:10:39.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Oct  6 21:10:39.940: INFO: Waiting up to 5m0s for pod "pod-b0826dce-7b2c-406a-a881-54a725b8097e" in namespace "emptydir-4638" to be "success or failure"
Oct  6 21:10:39.943: INFO: Pod "pod-b0826dce-7b2c-406a-a881-54a725b8097e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.61293ms
Oct  6 21:10:41.950: INFO: Pod "pod-b0826dce-7b2c-406a-a881-54a725b8097e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00987431s
Oct  6 21:10:43.957: INFO: Pod "pod-b0826dce-7b2c-406a-a881-54a725b8097e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017134275s
STEP: Saw pod success
Oct  6 21:10:43.957: INFO: Pod "pod-b0826dce-7b2c-406a-a881-54a725b8097e" satisfied condition "success or failure"
Oct  6 21:10:43.962: INFO: Trying to get logs from node jerma-worker2 pod pod-b0826dce-7b2c-406a-a881-54a725b8097e container test-container: 
STEP: delete the pod
Oct  6 21:10:44.106: INFO: Waiting for pod pod-b0826dce-7b2c-406a-a881-54a725b8097e to disappear
Oct  6 21:10:44.118: INFO: Pod pod-b0826dce-7b2c-406a-a881-54a725b8097e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:10:44.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4638" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3266,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:10:44.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct  6 21:10:44.254: INFO: Waiting up to 5m0s for pod "pod-e1f83b6b-cb42-4c41-998a-b2f099b9c83d" in namespace "emptydir-3396" to be "success or failure"
Oct  6 21:10:44.281: INFO: Pod "pod-e1f83b6b-cb42-4c41-998a-b2f099b9c83d": Phase="Pending", Reason="", readiness=false. Elapsed: 27.414616ms
Oct  6 21:10:46.290: INFO: Pod "pod-e1f83b6b-cb42-4c41-998a-b2f099b9c83d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03573994s
Oct  6 21:10:48.296: INFO: Pod "pod-e1f83b6b-cb42-4c41-998a-b2f099b9c83d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041655014s
STEP: Saw pod success
Oct  6 21:10:48.296: INFO: Pod "pod-e1f83b6b-cb42-4c41-998a-b2f099b9c83d" satisfied condition "success or failure"
Oct  6 21:10:48.300: INFO: Trying to get logs from node jerma-worker pod pod-e1f83b6b-cb42-4c41-998a-b2f099b9c83d container test-container: 
STEP: delete the pod
Oct  6 21:10:48.365: INFO: Waiting for pod pod-e1f83b6b-cb42-4c41-998a-b2f099b9c83d to disappear
Oct  6 21:10:48.375: INFO: Pod pod-e1f83b6b-cb42-4c41-998a-b2f099b9c83d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:10:48.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3396" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":3273,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:10:48.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Oct  6 21:10:48.773: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bcfd2fbc-6ba2-40c0-9f1b-df99a2e23579" in namespace "projected-6783" to be "success or failure"
Oct  6 21:10:48.805: INFO: Pod "downwardapi-volume-bcfd2fbc-6ba2-40c0-9f1b-df99a2e23579": Phase="Pending", Reason="", readiness=false. Elapsed: 31.64214ms
Oct  6 21:10:50.812: INFO: Pod "downwardapi-volume-bcfd2fbc-6ba2-40c0-9f1b-df99a2e23579": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038174252s
Oct  6 21:10:52.819: INFO: Pod "downwardapi-volume-bcfd2fbc-6ba2-40c0-9f1b-df99a2e23579": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045350227s
STEP: Saw pod success
Oct  6 21:10:52.819: INFO: Pod "downwardapi-volume-bcfd2fbc-6ba2-40c0-9f1b-df99a2e23579" satisfied condition "success or failure"
Oct  6 21:10:52.824: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-bcfd2fbc-6ba2-40c0-9f1b-df99a2e23579 container client-container: 
STEP: delete the pod
Oct  6 21:10:53.019: INFO: Waiting for pod downwardapi-volume-bcfd2fbc-6ba2-40c0-9f1b-df99a2e23579 to disappear
Oct  6 21:10:53.191: INFO: Pod downwardapi-volume-bcfd2fbc-6ba2-40c0-9f1b-df99a2e23579 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:10:53.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6783" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3293,"failed":0}
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:10:53.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Oct  6 21:10:53.278: INFO: Waiting up to 5m0s for pod "downward-api-2093c9e0-d982-4d40-9a31-4a84b4352798" in namespace "downward-api-5725" to be "success or failure"
Oct  6 21:10:53.286: INFO: Pod "downward-api-2093c9e0-d982-4d40-9a31-4a84b4352798": Phase="Pending", Reason="", readiness=false. Elapsed: 8.311319ms
Oct  6 21:10:55.294: INFO: Pod "downward-api-2093c9e0-d982-4d40-9a31-4a84b4352798": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015826831s
Oct  6 21:10:57.301: INFO: Pod "downward-api-2093c9e0-d982-4d40-9a31-4a84b4352798": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023120861s
STEP: Saw pod success
Oct  6 21:10:57.301: INFO: Pod "downward-api-2093c9e0-d982-4d40-9a31-4a84b4352798" satisfied condition "success or failure"
Oct  6 21:10:57.307: INFO: Trying to get logs from node jerma-worker pod downward-api-2093c9e0-d982-4d40-9a31-4a84b4352798 container dapi-container: 
STEP: delete the pod
Oct  6 21:10:57.343: INFO: Waiting for pod downward-api-2093c9e0-d982-4d40-9a31-4a84b4352798 to disappear
Oct  6 21:10:57.357: INFO: Pod downward-api-2093c9e0-d982-4d40-9a31-4a84b4352798 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:10:57.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5725" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3299,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:10:57.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Oct  6 21:10:57.598: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:11:03.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6789" for this suite.

• [SLOW TEST:6.311 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":197,"skipped":3318,"failed":0}
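Editor's note — not part of the log: this test checks that when an init container fails on a `restartPolicy: Never` pod, the app containers never start and the pod ends in `Failed`. A sketch of such a pod (names are hypothetical):

```yaml
# Hypothetical pod: the failing init container prevents the app
# container from ever starting; with restartPolicy: Never the pod
# is not retried and its phase becomes Failed.
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-example   # hypothetical name
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox
    command: ["sh", "-c", "exit 1"]   # deliberate failure
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo this should never run"]
```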
S
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:11:03.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Oct  6 21:11:08.313: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5381 pod-service-account-e9de5ff3-03ef-482d-b8a4-dd50a06c3383 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Oct  6 21:11:09.735: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5381 pod-service-account-e9de5ff3-03ef-482d-b8a4-dd50a06c3383 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Oct  6 21:11:11.202: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5381 pod-service-account-e9de5ff3-03ef-482d-b8a4-dd50a06c3383 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:11:12.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5381" for this suite.

• [SLOW TEST:8.981 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":278,"completed":198,"skipped":3319,"failed":0}
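Editor's note — not part of the log: the three `kubectl exec ... cat` invocations above read the files that the service account admission controller projects into every pod. A hypothetical pod that inspects the same mount:

```yaml
# Hypothetical pod: the auto-mounted service account volume exposes
# token, ca.crt, and namespace under the well-known path read by the
# kubectl exec commands in the log above.
apiVersion: v1
kind: Pod
metadata:
  name: svc-token-example   # hypothetical name
spec:
  serviceAccountName: default
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command:
    - sh
    - -c
    - |
      cat /var/run/secrets/kubernetes.io/serviceaccount/token
      cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
```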
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:11:12.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Oct  6 21:11:12.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Oct  6 21:12:20.030: INFO: >>> kubeConfig: /root/.kube/config
Oct  6 21:12:39.430: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:13:46.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5956" for this suite.

• [SLOW TEST:153.752 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":199,"skipped":3330,"failed":0}
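Editor's note — not part of the log: the "one multiversion CRD" case above relies on a CRD serving two versions of the same group. A sketch of that shape (group, kind, and version names are hypothetical, not the ones the test generates):

```yaml
# Hypothetical multi-version CRD: both served versions are published
# into the OpenAPI document; exactly one version is the storage version.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com   # hypothetical group/plural
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
```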
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:13:46.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Oct  6 21:13:46.499: INFO: namespace kubectl-9331
Oct  6 21:13:46.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9331'
Oct  6 21:13:48.123: INFO: stderr: ""
Oct  6 21:13:48.123: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Oct  6 21:13:49.130: INFO: Selector matched 1 pods for map[app:agnhost]
Oct  6 21:13:49.131: INFO: Found 0 / 1
Oct  6 21:13:50.130: INFO: Selector matched 1 pods for map[app:agnhost]
Oct  6 21:13:50.130: INFO: Found 0 / 1
Oct  6 21:13:51.131: INFO: Selector matched 1 pods for map[app:agnhost]
Oct  6 21:13:51.131: INFO: Found 0 / 1
Oct  6 21:13:52.131: INFO: Selector matched 1 pods for map[app:agnhost]
Oct  6 21:13:52.131: INFO: Found 1 / 1
Oct  6 21:13:52.131: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Oct  6 21:13:52.137: INFO: Selector matched 1 pods for map[app:agnhost]
Oct  6 21:13:52.137: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Oct  6 21:13:52.137: INFO: wait on agnhost-master startup in kubectl-9331 
Oct  6 21:13:52.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-26wsx agnhost-master --namespace=kubectl-9331'
Oct  6 21:13:53.403: INFO: stderr: ""
Oct  6 21:13:53.403: INFO: stdout: "Paused\n"
STEP: exposing RC
Oct  6 21:13:53.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9331'
Oct  6 21:13:54.776: INFO: stderr: ""
Oct  6 21:13:54.776: INFO: stdout: "service/rm2 exposed\n"
Oct  6 21:13:54.788: INFO: Service rm2 in namespace kubectl-9331 found.
STEP: exposing service
Oct  6 21:13:56.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9331'
Oct  6 21:13:58.193: INFO: stderr: ""
Oct  6 21:13:58.193: INFO: stdout: "service/rm3 exposed\n"
Oct  6 21:13:58.198: INFO: Service rm3 in namespace kubectl-9331 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:14:00.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9331" for this suite.

• [SLOW TEST:13.800 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1189
    should create services for rc  [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":278,"completed":200,"skipped":3350,"failed":0}
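Editor's note — not part of the log: the `kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379` command in the log is roughly equivalent to applying a Service like this (ports and the `app: agnhost` selector are taken from the log; treat the rest as a sketch):

```yaml
# Approximate Service that `kubectl expose` generated for the RC:
# it selects the RC's pods by label and forwards port 1234 to 6379.
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-9331
spec:
  selector:
    app: agnhost   # label matched in the log ("map[app:agnhost]")
  ports:
  - port: 1234
    targetPort: 6379
```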
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:14:00.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct  6 21:14:04.293: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct  6 21:14:06.378: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615644, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615644, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615644, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615644, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 21:14:08.386: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615644, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615644, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615644, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615644, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct  6 21:14:11.415: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:14:11.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2012" for this suite.
STEP: Destroying namespace "webhook-2012-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.344 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":201,"skipped":3371,"failed":0}
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:14:11.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6606
[It] should have a working scale subresource [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-6606
Oct  6 21:14:11.688: INFO: Found 0 stateful pods, waiting for 1
Oct  6 21:14:21.697: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Oct  6 21:14:21.736: INFO: Deleting all statefulset in ns statefulset-6606
Oct  6 21:14:21.773: INFO: Scaling statefulset ss to 0
Oct  6 21:14:41.889: INFO: Waiting for statefulset status.replicas updated to 0
Oct  6 21:14:41.894: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:14:41.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6606" for this suite.

• [SLOW TEST:30.377 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":202,"skipped":3377,"failed":0}
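Editor's note — not part of the log: the scale-subresource test above creates a StatefulSet `ss` bound to the headless service `test`, then updates replicas through the `scale` subresource. A hypothetical manifest of that starting state (image and labels are placeholders):

```yaml
# Hypothetical StatefulSet matching the test's setup: one replica,
# governed by the headless service "test" created beforehand.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test        # headless service from the log
  replicas: 1
  selector:
    matchLabels:
      app: ss-example      # hypothetical label
  template:
    metadata:
      labels:
        app: ss-example
    spec:
      containers:
      - name: webserver
        image: nginx       # placeholder image
```

Scaling via `kubectl scale statefulset ss --replicas=2` goes through the same `scale` subresource the test exercises.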
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:14:41.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:14:46.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-661" for this suite.

• [SLOW TEST:5.097 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":203,"skipped":3388,"failed":0}
SSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:14:47.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Oct  6 21:14:47.132: INFO: Waiting up to 5m0s for pod "downward-api-515f3c20-9816-44fa-b8e5-be5e6c2537c0" in namespace "downward-api-8343" to be "success or failure"
Oct  6 21:14:47.180: INFO: Pod "downward-api-515f3c20-9816-44fa-b8e5-be5e6c2537c0": Phase="Pending", Reason="", readiness=false. Elapsed: 48.602946ms
Oct  6 21:14:49.187: INFO: Pod "downward-api-515f3c20-9816-44fa-b8e5-be5e6c2537c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055236208s
Oct  6 21:14:51.195: INFO: Pod "downward-api-515f3c20-9816-44fa-b8e5-be5e6c2537c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062671279s
STEP: Saw pod success
Oct  6 21:14:51.195: INFO: Pod "downward-api-515f3c20-9816-44fa-b8e5-be5e6c2537c0" satisfied condition "success or failure"
Oct  6 21:14:51.199: INFO: Trying to get logs from node jerma-worker pod downward-api-515f3c20-9816-44fa-b8e5-be5e6c2537c0 container dapi-container: 
STEP: delete the pod
Oct  6 21:14:51.289: INFO: Waiting for pod downward-api-515f3c20-9816-44fa-b8e5-be5e6c2537c0 to disappear
Oct  6 21:14:51.311: INFO: Pod downward-api-515f3c20-9816-44fa-b8e5-be5e6c2537c0 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:14:51.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8343" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3391,"failed":0}
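Editor's note — not part of the log: this test injects the pod's own UID as an environment variable via a downward API `fieldRef`. A minimal sketch (names hypothetical):

```yaml
# Hypothetical pod: metadata.uid exposed to the container as POD_UID
# via the downward API fieldRef selector.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-uid-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
```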
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:14:51.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct  6 21:14:56.412: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct  6 21:14:58.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615696, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615696, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615696, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615696, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 21:15:00.439: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615696, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615696, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615696, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615696, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct  6 21:15:03.491: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:15:13.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3590" for this suite.
STEP: Destroying namespace "webhook-3590-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:22.697 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":205,"skipped":3410,"failed":0}
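Editor's note — not part of the log: "Registering the webhook via the AdmissionRegistration API" above registers a validating webhook that rejects pod and configmap creation. A hypothetical registration of that shape (the webhook name, path, and selectors are placeholders; the service namespace/name echo the log's `webhook-3590` / `e2e-test-webhook`):

```yaml
# Hypothetical ValidatingWebhookConfiguration: sends CREATE/UPDATE of
# pods and configmaps to the test's webhook service for admission review.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-example-webhook        # hypothetical name
webhooks:
- name: deny-unwanted.example.com   # hypothetical webhook name
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods", "configmaps"]
  clientConfig:
    service:
      namespace: webhook-3590
      name: e2e-test-webhook
      path: /always-deny            # hypothetical handler path
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  # The test also whitelists a namespace; that is done with a
  # namespaceSelector excluding labeled namespaces (details omitted).
```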
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:15:14.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:15:25.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5801" for this suite.

• [SLOW TEST:11.260 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":206,"skipped":3426,"failed":0}
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:15:25.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
Oct  6 21:15:25.419: INFO: Waiting up to 5m0s for pod "var-expansion-e55e1dbc-6038-43db-9836-0ea9e14c252c" in namespace "var-expansion-4831" to be "success or failure"
Oct  6 21:15:25.475: INFO: Pod "var-expansion-e55e1dbc-6038-43db-9836-0ea9e14c252c": Phase="Pending", Reason="", readiness=false. Elapsed: 55.747243ms
Oct  6 21:15:27.481: INFO: Pod "var-expansion-e55e1dbc-6038-43db-9836-0ea9e14c252c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062159808s
Oct  6 21:15:29.488: INFO: Pod "var-expansion-e55e1dbc-6038-43db-9836-0ea9e14c252c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069262103s
STEP: Saw pod success
Oct  6 21:15:29.489: INFO: Pod "var-expansion-e55e1dbc-6038-43db-9836-0ea9e14c252c" satisfied condition "success or failure"
Oct  6 21:15:29.510: INFO: Trying to get logs from node jerma-worker pod var-expansion-e55e1dbc-6038-43db-9836-0ea9e14c252c container dapi-container: 
STEP: delete the pod
Oct  6 21:15:29.576: INFO: Waiting for pod var-expansion-e55e1dbc-6038-43db-9836-0ea9e14c252c to disappear
Oct  6 21:15:29.602: INFO: Pod var-expansion-e55e1dbc-6038-43db-9836-0ea9e14c252c no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:15:29.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4831" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3428,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:15:29.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct  6 21:15:29.694: INFO: Waiting up to 5m0s for pod "pod-964a3c33-60a9-42e1-83a3-30b916905c46" in namespace "emptydir-1037" to be "success or failure"
Oct  6 21:15:29.750: INFO: Pod "pod-964a3c33-60a9-42e1-83a3-30b916905c46": Phase="Pending", Reason="", readiness=false. Elapsed: 55.868656ms
Oct  6 21:15:31.757: INFO: Pod "pod-964a3c33-60a9-42e1-83a3-30b916905c46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062366025s
Oct  6 21:15:33.763: INFO: Pod "pod-964a3c33-60a9-42e1-83a3-30b916905c46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068821733s
STEP: Saw pod success
Oct  6 21:15:33.763: INFO: Pod "pod-964a3c33-60a9-42e1-83a3-30b916905c46" satisfied condition "success or failure"
Oct  6 21:15:33.768: INFO: Trying to get logs from node jerma-worker pod pod-964a3c33-60a9-42e1-83a3-30b916905c46 container test-container: 
STEP: delete the pod
Oct  6 21:15:33.799: INFO: Waiting for pod pod-964a3c33-60a9-42e1-83a3-30b916905c46 to disappear
Oct  6 21:15:33.804: INFO: Pod pod-964a3c33-60a9-42e1-83a3-30b916905c46 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:15:33.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1037" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3461,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:15:33.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-5b355eeb-c182-4df8-98f5-f822e18b8993 in namespace container-probe-7909
Oct  6 21:15:37.895: INFO: Started pod busybox-5b355eeb-c182-4df8-98f5-f822e18b8993 in namespace container-probe-7909
STEP: checking the pod's current state and verifying that restartCount is present
Oct  6 21:15:37.899: INFO: Initial restart count of pod busybox-5b355eeb-c182-4df8-98f5-f822e18b8993 is 0
Oct  6 21:16:26.129: INFO: Restart count of pod container-probe-7909/busybox-5b355eeb-c182-4df8-98f5-f822e18b8993 is now 1 (48.229432737s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:16:26.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7909" for this suite.

• [SLOW TEST:52.354 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3514,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:16:26.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9187.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-9187.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9187.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9187.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-9187.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9187.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct  6 21:16:32.464: INFO: DNS probes using dns-9187/dns-test-16141504-f8fa-4271-9448-62a5801672a7 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:16:32.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9187" for this suite.

• [SLOW TEST:6.443 seconds]
[sig-network] DNS
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":210,"skipped":3529,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:16:32.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-6948/configmap-test-868d6a4e-1190-449f-9b59-d93547abf46d
STEP: Creating a pod to test consume configMaps
Oct  6 21:16:33.091: INFO: Waiting up to 5m0s for pod "pod-configmaps-a1481cda-66ba-4d03-836c-8294572f5af6" in namespace "configmap-6948" to be "success or failure"
Oct  6 21:16:33.114: INFO: Pod "pod-configmaps-a1481cda-66ba-4d03-836c-8294572f5af6": Phase="Pending", Reason="", readiness=false. Elapsed: 22.909157ms
Oct  6 21:16:35.171: INFO: Pod "pod-configmaps-a1481cda-66ba-4d03-836c-8294572f5af6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079690818s
Oct  6 21:16:37.253: INFO: Pod "pod-configmaps-a1481cda-66ba-4d03-836c-8294572f5af6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.162036932s
STEP: Saw pod success
Oct  6 21:16:37.254: INFO: Pod "pod-configmaps-a1481cda-66ba-4d03-836c-8294572f5af6" satisfied condition "success or failure"
Oct  6 21:16:37.258: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-a1481cda-66ba-4d03-836c-8294572f5af6 container env-test: 
STEP: delete the pod
Oct  6 21:16:37.284: INFO: Waiting for pod pod-configmaps-a1481cda-66ba-4d03-836c-8294572f5af6 to disappear
Oct  6 21:16:37.289: INFO: Pod pod-configmaps-a1481cda-66ba-4d03-836c-8294572f5af6 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:16:37.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6948" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3550,"failed":0}
SS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:16:37.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Oct  6 21:16:37.399: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
Oct  6 21:16:40.913: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Oct  6 21:16:43.139: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615800, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615800, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615800, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615800, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 21:16:45.144: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615800, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615800, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615800, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737615800, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 21:16:47.715: INFO: Waited 549.00112ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:16:48.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-8495" for this suite.

• [SLOW TEST:10.944 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":212,"skipped":3552,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:16:48.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Oct  6 21:16:48.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Oct  6 21:16:49.764: INFO: stderr: ""
Oct  6 21:16:49.764: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:39833\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:39833/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:16:49.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5554" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":278,"completed":213,"skipped":3575,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:16:49.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-956ed93e-53c8-4a22-b435-5f15d79d7dba
STEP: Creating a pod to test consume configMaps
Oct  6 21:16:49.845: INFO: Waiting up to 5m0s for pod "pod-configmaps-989cd57c-d906-4e2d-9a50-4cf71a7b65c9" in namespace "configmap-2713" to be "success or failure"
Oct  6 21:16:49.850: INFO: Pod "pod-configmaps-989cd57c-d906-4e2d-9a50-4cf71a7b65c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.622643ms
Oct  6 21:16:51.857: INFO: Pod "pod-configmaps-989cd57c-d906-4e2d-9a50-4cf71a7b65c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011642611s
Oct  6 21:16:53.868: INFO: Pod "pod-configmaps-989cd57c-d906-4e2d-9a50-4cf71a7b65c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022937602s
STEP: Saw pod success
Oct  6 21:16:53.868: INFO: Pod "pod-configmaps-989cd57c-d906-4e2d-9a50-4cf71a7b65c9" satisfied condition "success or failure"
Oct  6 21:16:53.880: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-989cd57c-d906-4e2d-9a50-4cf71a7b65c9 container configmap-volume-test: 
STEP: delete the pod
Oct  6 21:16:53.910: INFO: Waiting for pod pod-configmaps-989cd57c-d906-4e2d-9a50-4cf71a7b65c9 to disappear
Oct  6 21:16:53.914: INFO: Pod pod-configmaps-989cd57c-d906-4e2d-9a50-4cf71a7b65c9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:16:53.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2713" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3599,"failed":0}
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:16:53.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Oct  6 21:16:54.001: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Oct  6 21:16:54.033: INFO: Waiting for terminating namespaces to be deleted...
Oct  6 21:16:54.057: INFO: 
Logging pods the kubelet thinks is on node jerma-worker before test
Oct  6 21:16:54.070: INFO: kube-proxy-knc9b from kube-system started at 2020-09-23 08:27:39 +0000 UTC (1 container statuses recorded)
Oct  6 21:16:54.070: INFO: 	Container kube-proxy ready: true, restart count 0
Oct  6 21:16:54.071: INFO: kindnet-nlsvd from kube-system started at 2020-09-23 08:27:39 +0000 UTC (1 container statuses recorded)
Oct  6 21:16:54.071: INFO: 	Container kindnet-cni ready: true, restart count 0
Oct  6 21:16:54.071: INFO: 
Logging pods the kubelet thinks is on node jerma-worker2 before test
Oct  6 21:16:54.081: INFO: kindnet-5wksn from kube-system started at 2020-09-23 08:27:38 +0000 UTC (1 container statuses recorded)
Oct  6 21:16:54.081: INFO: 	Container kindnet-cni ready: true, restart count 0
Oct  6 21:16:54.082: INFO: kube-proxy-jgndm from kube-system started at 2020-09-23 08:27:38 +0000 UTC (1 container statuses recorded)
Oct  6 21:16:54.082: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.163b83ce33d54b13], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:16:55.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8427" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":215,"skipped":3606,"failed":0}
SSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:16:55.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 21:16:55.219: INFO: Waiting up to 5m0s for pod "busybox-user-65534-c209e615-7b51-450e-a53b-a345712ee742" in namespace "security-context-test-1268" to be "success or failure"
Oct  6 21:16:55.234: INFO: Pod "busybox-user-65534-c209e615-7b51-450e-a53b-a345712ee742": Phase="Pending", Reason="", readiness=false. Elapsed: 14.013414ms
Oct  6 21:16:57.241: INFO: Pod "busybox-user-65534-c209e615-7b51-450e-a53b-a345712ee742": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020964839s
Oct  6 21:16:59.247: INFO: Pod "busybox-user-65534-c209e615-7b51-450e-a53b-a345712ee742": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027426229s
Oct  6 21:17:01.255: INFO: Pod "busybox-user-65534-c209e615-7b51-450e-a53b-a345712ee742": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035061092s
Oct  6 21:17:01.255: INFO: Pod "busybox-user-65534-c209e615-7b51-450e-a53b-a345712ee742" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:17:01.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1268" for this suite.

• [SLOW TEST:6.111 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3610,"failed":0}
SS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:17:01.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Oct  6 21:17:01.351: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2199 /api/v1/namespaces/watch-2199/configmaps/e2e-watch-test-configmap-a 6cb077af-4e5a-47ed-b8c9-472c774afc13 3616952 0 2020-10-06 21:17:01 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Oct  6 21:17:01.352: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2199 /api/v1/namespaces/watch-2199/configmaps/e2e-watch-test-configmap-a 6cb077af-4e5a-47ed-b8c9-472c774afc13 3616952 0 2020-10-06 21:17:01 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Oct  6 21:17:11.362: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2199 /api/v1/namespaces/watch-2199/configmaps/e2e-watch-test-configmap-a 6cb077af-4e5a-47ed-b8c9-472c774afc13 3616993 0 2020-10-06 21:17:01 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Oct  6 21:17:11.363: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2199 /api/v1/namespaces/watch-2199/configmaps/e2e-watch-test-configmap-a 6cb077af-4e5a-47ed-b8c9-472c774afc13 3616993 0 2020-10-06 21:17:01 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Oct  6 21:17:21.374: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2199 /api/v1/namespaces/watch-2199/configmaps/e2e-watch-test-configmap-a 6cb077af-4e5a-47ed-b8c9-472c774afc13 3617023 0 2020-10-06 21:17:01 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Oct  6 21:17:21.375: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2199 /api/v1/namespaces/watch-2199/configmaps/e2e-watch-test-configmap-a 6cb077af-4e5a-47ed-b8c9-472c774afc13 3617023 0 2020-10-06 21:17:01 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Oct  6 21:17:31.385: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2199 /api/v1/namespaces/watch-2199/configmaps/e2e-watch-test-configmap-a 6cb077af-4e5a-47ed-b8c9-472c774afc13 3617053 0 2020-10-06 21:17:01 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Oct  6 21:17:31.385: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2199 /api/v1/namespaces/watch-2199/configmaps/e2e-watch-test-configmap-a 6cb077af-4e5a-47ed-b8c9-472c774afc13 3617053 0 2020-10-06 21:17:01 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Oct  6 21:17:41.397: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2199 /api/v1/namespaces/watch-2199/configmaps/e2e-watch-test-configmap-b e6eaaea3-7a9a-497e-8c0b-a4312bce08e1 3617083 0 2020-10-06 21:17:41 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Oct  6 21:17:41.398: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2199 /api/v1/namespaces/watch-2199/configmaps/e2e-watch-test-configmap-b e6eaaea3-7a9a-497e-8c0b-a4312bce08e1 3617083 0 2020-10-06 21:17:41 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Oct  6 21:17:51.408: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2199 /api/v1/namespaces/watch-2199/configmaps/e2e-watch-test-configmap-b e6eaaea3-7a9a-497e-8c0b-a4312bce08e1 3617113 0 2020-10-06 21:17:41 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Oct  6 21:17:51.409: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2199 /api/v1/namespaces/watch-2199/configmaps/e2e-watch-test-configmap-b e6eaaea3-7a9a-497e-8c0b-a4312bce08e1 3617113 0 2020-10-06 21:17:41 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:18:01.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2199" for this suite.

• [SLOW TEST:60.161 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":217,"skipped":3612,"failed":0}
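Editor's note: the Watchers test above registers three watchers (label A, label B, and A-or-B) and verifies that each observes exactly the ADDED/MODIFIED/DELETED events for configmaps matching its selector — note how every event for configmap A appears twice in the log, once per matching watcher. A minimal sketch of that label-selector dispatch, with illustrative names (the real test uses client-go's watch interface, not this code):

```python
# Sketch: dispatch watch events to watchers by label selector.
# Watcher/selector shapes are illustrative, not the e2e framework's types.

def matches(selector: dict, labels: dict) -> bool:
    """True if every key/value pair in the selector appears in the labels."""
    return all(labels.get(k) == v for k, v in selector.items())

class Watcher:
    def __init__(self, *selectors):
        self.selectors = selectors  # multiple selectors act as OR ("A or B")
        self.events = []

    def observe(self, event_type, obj):
        # A watcher only records events whose object matches one selector.
        if any(matches(s, obj["labels"]) for s in self.selectors):
            self.events.append((event_type, obj["name"]))

watch_a = Watcher({"watch-this-configmap": "multiple-watchers-A"})
watch_b = Watcher({"watch-this-configmap": "multiple-watchers-B"})
watch_ab = Watcher({"watch-this-configmap": "multiple-watchers-A"},
                   {"watch-this-configmap": "multiple-watchers-B"})

cm_a = {"name": "e2e-watch-test-configmap-a",
        "labels": {"watch-this-configmap": "multiple-watchers-A"}}

# Replay the event sequence the log shows for configmap A.
for ev in ("ADDED", "MODIFIED", "MODIFIED", "DELETED"):
    for w in (watch_a, watch_b, watch_ab):
        w.observe(ev, cm_a)

print(len(watch_a.events), len(watch_b.events), len(watch_ab.events))  # → 4 0 4
```

This is why each log line for configmap A is duplicated: both the A watcher and the A-or-B watcher receive it, while the B watcher stays silent.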
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:18:01.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Oct  6 21:18:01.593: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:18:01.602: INFO: Number of nodes with available pods: 0
Oct  6 21:18:01.602: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 21:18:02.617: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:18:02.624: INFO: Number of nodes with available pods: 0
Oct  6 21:18:02.624: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 21:18:03.797: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:18:03.813: INFO: Number of nodes with available pods: 0
Oct  6 21:18:03.813: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 21:18:04.616: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:18:04.623: INFO: Number of nodes with available pods: 0
Oct  6 21:18:04.623: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 21:18:05.614: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:18:05.621: INFO: Number of nodes with available pods: 1
Oct  6 21:18:05.621: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 21:18:06.618: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:18:06.665: INFO: Number of nodes with available pods: 2
Oct  6 21:18:06.665: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Oct  6 21:18:06.693: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:18:06.700: INFO: Number of nodes with available pods: 1
Oct  6 21:18:06.700: INFO: Node jerma-worker2 is running more than one daemon pod
Oct  6 21:18:07.709: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:18:07.715: INFO: Number of nodes with available pods: 1
Oct  6 21:18:07.715: INFO: Node jerma-worker2 is running more than one daemon pod
Oct  6 21:18:08.713: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:18:08.720: INFO: Number of nodes with available pods: 1
Oct  6 21:18:08.720: INFO: Node jerma-worker2 is running more than one daemon pod
Oct  6 21:18:09.713: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:18:09.719: INFO: Number of nodes with available pods: 1
Oct  6 21:18:09.719: INFO: Node jerma-worker2 is running more than one daemon pod
Oct  6 21:18:10.713: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:18:10.719: INFO: Number of nodes with available pods: 1
Oct  6 21:18:10.719: INFO: Node jerma-worker2 is running more than one daemon pod
Oct  6 21:18:11.713: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:18:11.719: INFO: Number of nodes with available pods: 1
Oct  6 21:18:11.719: INFO: Node jerma-worker2 is running more than one daemon pod
Oct  6 21:18:12.714: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:18:12.722: INFO: Number of nodes with available pods: 1
Oct  6 21:18:12.722: INFO: Node jerma-worker2 is running more than one daemon pod
Oct  6 21:18:13.720: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:18:13.744: INFO: Number of nodes with available pods: 1
Oct  6 21:18:13.744: INFO: Node jerma-worker2 is running more than one daemon pod
Oct  6 21:18:14.713: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:18:14.719: INFO: Number of nodes with available pods: 1
Oct  6 21:18:14.719: INFO: Node jerma-worker2 is running more than one daemon pod
Oct  6 21:18:15.712: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:18:15.719: INFO: Number of nodes with available pods: 1
Oct  6 21:18:15.719: INFO: Node jerma-worker2 is running more than one daemon pod
Oct  6 21:18:16.714: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:18:16.722: INFO: Number of nodes with available pods: 2
Oct  6 21:18:16.722: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-109, will wait for the garbage collector to delete the pods
Oct  6 21:18:16.793: INFO: Deleting DaemonSet.extensions daemon-set took: 9.30475ms
Oct  6 21:18:17.194: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.827471ms
Oct  6 21:18:24.401: INFO: Number of nodes with available pods: 0
Oct  6 21:18:24.401: INFO: Number of running nodes: 0, number of available pods: 0
Oct  6 21:18:24.406: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-109/daemonsets","resourceVersion":"3617272"},"items":null}

Oct  6 21:18:24.408: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-109/pods","resourceVersion":"3617272"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:18:24.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-109" for this suite.

• [SLOW TEST:23.029 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":218,"skipped":3623,"failed":0}
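Editor's note: the repeated "can't tolerate node jerma-control-plane" / "Number of nodes with available pods" lines above are a poll loop: the test skips nodes whose taints the DaemonSet does not tolerate, then re-reads pod availability until every remaining node has a running daemon pod. A sketch of that loop under stated assumptions (node/taint shapes are simplified stand-ins for the real API objects):

```python
# Sketch of the DaemonSet readiness poll shown in the log. Each entry in
# available_by_attempt stands in for one API read of pod availability.

def schedulable_nodes(nodes, tolerated_taints):
    """Nodes whose every taint is tolerated; the master node is skipped."""
    return [n for n in nodes
            if all(t in tolerated_taints for t in n["taints"])]

def wait_for_daemonset(nodes, available_by_attempt, tolerated_taints=()):
    """Return the number of polls until all schedulable nodes have a pod."""
    targets = schedulable_nodes(nodes, tolerated_taints)
    for attempt, availability in enumerate(available_by_attempt, start=1):
        if all(availability.get(n["name"], False) for n in targets):
            return attempt
    raise TimeoutError("daemon pods never became available on all nodes")

nodes = [
    {"name": "jerma-control-plane",
     "taints": ["node-role.kubernetes.io/master:NoSchedule"]},
    {"name": "jerma-worker", "taints": []},
    {"name": "jerma-worker2", "taints": []},
]
polls = [
    {"jerma-worker": False, "jerma-worker2": False},  # 0 available
    {"jerma-worker": True,  "jerma-worker2": False},  # 1 available
    {"jerma-worker": True,  "jerma-worker2": True},   # 2 available: done
]
print(wait_for_daemonset(nodes, polls))  # → 3
```

The same loop runs twice in the test: once after creating the DaemonSet, and again after killing one daemon pod to confirm the controller revives it.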
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:18:24.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1006 21:18:54.729382       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Oct  6 21:18:54.729: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:18:54.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-343" for this suite.

• [SLOW TEST:30.273 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":219,"skipped":3686,"failed":0}
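Editor's note: the garbage-collector test above deletes a Deployment with `deleteOptions.propagationPolicy: Orphan` and then waits 30 seconds to confirm the ReplicaSet is *not* cascade-deleted. A toy sketch of the propagation-policy semantics (in-memory store standing in for etcd-backed API objects; the real collector works on `ownerReferences`):

```python
# Sketch of deletion propagation: Orphan strips the owner reference and
# keeps dependents; Background/Foreground cascade (collapsed to one step).

def delete(store, key, policy):
    dependents = [k for k, o in store.items() if key in o["ownerRefs"]]
    del store[key]
    for d in dependents:
        if policy == "Orphan":
            store[d]["ownerRefs"].remove(key)  # dependent survives, unowned
        else:  # "Background" / "Foreground": garbage-collect dependents too
            delete(store, d, policy)

store = {
    "deployment/d1": {"ownerRefs": []},
    "replicaset/rs1": {"ownerRefs": ["deployment/d1"]},
}
delete(store, "deployment/d1", "Orphan")
print(sorted(store))                        # → ['replicaset/rs1']
print(store["replicaset/rs1"]["ownerRefs"])  # → []
```

With `Orphan`, the ReplicaSet outlives its Deployment exactly as the test asserts; with `Background` the same delete would have removed it.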
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:18:54.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Oct  6 21:18:55.514: INFO: Pod name wrapped-volume-race-c74524d4-25a2-4fc9-8f14-2e4df7d2e2e1: Found 0 pods out of 5
Oct  6 21:19:00.934: INFO: Pod name wrapped-volume-race-c74524d4-25a2-4fc9-8f14-2e4df7d2e2e1: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c74524d4-25a2-4fc9-8f14-2e4df7d2e2e1 in namespace emptydir-wrapper-2518, will wait for the garbage collector to delete the pods
Oct  6 21:19:15.269: INFO: Deleting ReplicationController wrapped-volume-race-c74524d4-25a2-4fc9-8f14-2e4df7d2e2e1 took: 218.174106ms
Oct  6 21:19:15.570: INFO: Terminating ReplicationController wrapped-volume-race-c74524d4-25a2-4fc9-8f14-2e4df7d2e2e1 pods took: 300.633368ms
STEP: Creating RC which spawns configmap-volume pods
Oct  6 21:19:23.827: INFO: Pod name wrapped-volume-race-7469d7af-7fb7-4d17-a10b-ef25dfa77e33: Found 1 pods out of 5
Oct  6 21:19:28.846: INFO: Pod name wrapped-volume-race-7469d7af-7fb7-4d17-a10b-ef25dfa77e33: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-7469d7af-7fb7-4d17-a10b-ef25dfa77e33 in namespace emptydir-wrapper-2518, will wait for the garbage collector to delete the pods
Oct  6 21:19:43.051: INFO: Deleting ReplicationController wrapped-volume-race-7469d7af-7fb7-4d17-a10b-ef25dfa77e33 took: 16.870876ms
Oct  6 21:19:43.351: INFO: Terminating ReplicationController wrapped-volume-race-7469d7af-7fb7-4d17-a10b-ef25dfa77e33 pods took: 300.572926ms
STEP: Creating RC which spawns configmap-volume pods
Oct  6 21:19:54.802: INFO: Pod name wrapped-volume-race-55b65743-5637-4ace-8365-84f2f99a5372: Found 0 pods out of 5
Oct  6 21:19:59.819: INFO: Pod name wrapped-volume-race-55b65743-5637-4ace-8365-84f2f99a5372: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-55b65743-5637-4ace-8365-84f2f99a5372 in namespace emptydir-wrapper-2518, will wait for the garbage collector to delete the pods
Oct  6 21:20:13.948: INFO: Deleting ReplicationController wrapped-volume-race-55b65743-5637-4ace-8365-84f2f99a5372 took: 31.013212ms
Oct  6 21:20:14.249: INFO: Terminating ReplicationController wrapped-volume-race-55b65743-5637-4ace-8365-84f2f99a5372 pods took: 300.891585ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:20:25.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2518" for this suite.

• [SLOW TEST:90.545 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":220,"skipped":3686,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:20:25.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Oct  6 21:20:25.406: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9526" to be "success or failure"
Oct  6 21:20:25.433: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 26.471618ms
Oct  6 21:20:27.494: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087733648s
Oct  6 21:20:29.501: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094225797s
Oct  6 21:20:31.509: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.102440607s
STEP: Saw pod success
Oct  6 21:20:31.509: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Oct  6 21:20:31.514: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Oct  6 21:20:31.587: INFO: Waiting for pod pod-host-path-test to disappear
Oct  6 21:20:31.592: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:20:31.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-9526" for this suite.

• [SLOW TEST:6.344 seconds]
[sig-storage] HostPath
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3710,"failed":0}
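Editor's note: the HostPath test above launches a pod whose container stats the mounted volume and reports its permission bits, which the framework compares against the expected mode. A minimal sketch of that in-container check (paths are illustrative; the real test reads the mode out of the container's log output):

```python
# Sketch: stat a mounted directory and report its permission bits,
# the way the hostPath test's container checks its volume mode.
import os
import stat
import tempfile

def volume_mode(path: str) -> str:
    """Return the permission bits of path as an octal string like '0777'."""
    return format(stat.S_IMODE(os.stat(path).st_mode), "04o")

mount_point = tempfile.mkdtemp()  # stand-in for the pod's /test-volume
os.chmod(mount_point, 0o777)
print(volume_mode(mount_point))  # → 0777
```

The pod in the log ("pod-host-path-test") runs two such containers against the same hostPath volume, so both views of the mode must agree.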
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:20:31.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Oct  6 21:20:31.755: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d32537b7-9a89-49c4-85eb-e98404ca61f7" in namespace "downward-api-5482" to be "success or failure"
Oct  6 21:20:31.805: INFO: Pod "downwardapi-volume-d32537b7-9a89-49c4-85eb-e98404ca61f7": Phase="Pending", Reason="", readiness=false. Elapsed: 49.654054ms
Oct  6 21:20:33.877: INFO: Pod "downwardapi-volume-d32537b7-9a89-49c4-85eb-e98404ca61f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122054391s
Oct  6 21:20:35.885: INFO: Pod "downwardapi-volume-d32537b7-9a89-49c4-85eb-e98404ca61f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.129113788s
STEP: Saw pod success
Oct  6 21:20:35.885: INFO: Pod "downwardapi-volume-d32537b7-9a89-49c4-85eb-e98404ca61f7" satisfied condition "success or failure"
Oct  6 21:20:35.890: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-d32537b7-9a89-49c4-85eb-e98404ca61f7 container client-container: 
STEP: delete the pod
Oct  6 21:20:36.052: INFO: Waiting for pod downwardapi-volume-d32537b7-9a89-49c4-85eb-e98404ca61f7 to disappear
Oct  6 21:20:36.132: INFO: Pod downwardapi-volume-d32537b7-9a89-49c4-85eb-e98404ca61f7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:20:36.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5482" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3735,"failed":0}
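Editor's note: the downward API test above asserts that when a container declares no CPU limit, a `resourceFieldRef` on `limits.cpu` falls back to the node's allocatable CPU, scaled by the field's divisor with round-up. A simplified sketch of that fallback-and-divide rule (plain millicore integers here, not full `resource.Quantity` parsing, so treat the rounding as an assumption of this sketch):

```python
# Sketch: value a container reads for "limits.cpu" via the downward API
# when no limit is set: node allocatable, divided by divisor, rounded up.
import math

def exposed_cpu_limit(container_limit_m, node_allocatable_m, divisor_m=1000):
    """All quantities in millicores; divisor_m=1000 models divisor '1'."""
    effective = (container_limit_m if container_limit_m is not None
                 else node_allocatable_m)  # fallback to node allocatable
    return math.ceil(effective / divisor_m)

print(exposed_cpu_limit(None, 16000))  # → 16 (no limit: 16-core node's value)
print(exposed_cpu_limit(500, 16000))   # → 1  (500m limit rounds up to 1)
```

The projected-downwardAPI memory-limit test that follows exercises the same `resourceFieldRef` mechanism for `limits.memory`, just through a projected volume instead of a plain downwardAPI volume.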
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:20:36.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Oct  6 21:20:36.273: INFO: Waiting up to 5m0s for pod "downwardapi-volume-47ed7a73-66ae-4e53-b0d5-ba75ae793297" in namespace "projected-4600" to be "success or failure"
Oct  6 21:20:36.281: INFO: Pod "downwardapi-volume-47ed7a73-66ae-4e53-b0d5-ba75ae793297": Phase="Pending", Reason="", readiness=false. Elapsed: 7.563145ms
Oct  6 21:20:38.287: INFO: Pod "downwardapi-volume-47ed7a73-66ae-4e53-b0d5-ba75ae793297": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013245865s
Oct  6 21:20:40.294: INFO: Pod "downwardapi-volume-47ed7a73-66ae-4e53-b0d5-ba75ae793297": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02039797s
STEP: Saw pod success
Oct  6 21:20:40.294: INFO: Pod "downwardapi-volume-47ed7a73-66ae-4e53-b0d5-ba75ae793297" satisfied condition "success or failure"
Oct  6 21:20:40.299: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-47ed7a73-66ae-4e53-b0d5-ba75ae793297 container client-container: 
STEP: delete the pod
Oct  6 21:20:40.408: INFO: Waiting for pod downwardapi-volume-47ed7a73-66ae-4e53-b0d5-ba75ae793297 to disappear
Oct  6 21:20:40.418: INFO: Pod downwardapi-volume-47ed7a73-66ae-4e53-b0d5-ba75ae793297 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:20:40.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4600" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3777,"failed":0}
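The downward API test that just passed boils down to a pod whose projected volume exposes the container's memory limit as a file, which the container then cats before exiting. A minimal sketch of such a manifest (the name, image, and mount path are illustrative, not the test's exact spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # illustrative; the test generates a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container        # container name reported in the log above
    image: busybox                # illustrative; the e2e suite uses its own test image
    command: ["/bin/sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```

The "success or failure" wait in the log corresponds to the pod reaching the Succeeded phase, after which the test fetches the container's logs to verify the printed limit.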

------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:20:40.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-7027
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-7027
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-7027
Oct  6 21:20:40.547: INFO: Found 0 stateful pods, waiting for 1
Oct  6 21:20:50.555: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Oct  6 21:20:50.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Oct  6 21:20:55.520: INFO: stderr: "I1006 21:20:55.401914    3660 log.go:172] (0x40000fac60) (0x40008f20a0) Create stream\nI1006 21:20:55.404787    3660 log.go:172] (0x40000fac60) (0x40008f20a0) Stream added, broadcasting: 1\nI1006 21:20:55.424805    3660 log.go:172] (0x40000fac60) Reply frame received for 1\nI1006 21:20:55.425468    3660 log.go:172] (0x40000fac60) (0x40008f2140) Create stream\nI1006 21:20:55.425525    3660 log.go:172] (0x40000fac60) (0x40008f2140) Stream added, broadcasting: 3\nI1006 21:20:55.426744    3660 log.go:172] (0x40000fac60) Reply frame received for 3\nI1006 21:20:55.426979    3660 log.go:172] (0x40000fac60) (0x40007f9a40) Create stream\nI1006 21:20:55.427039    3660 log.go:172] (0x40000fac60) (0x40007f9a40) Stream added, broadcasting: 5\nI1006 21:20:55.427962    3660 log.go:172] (0x40000fac60) Reply frame received for 5\nI1006 21:20:55.476522    3660 log.go:172] (0x40000fac60) Data frame received for 5\nI1006 21:20:55.476700    3660 log.go:172] (0x40007f9a40) (5) Data frame handling\nI1006 21:20:55.477070    3660 log.go:172] (0x40007f9a40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1006 21:20:55.499303    3660 log.go:172] (0x40000fac60) Data frame received for 5\nI1006 21:20:55.499515    3660 log.go:172] (0x40007f9a40) (5) Data frame handling\nI1006 21:20:55.499780    3660 log.go:172] (0x40000fac60) Data frame received for 3\nI1006 21:20:55.499915    3660 log.go:172] (0x40008f2140) (3) Data frame handling\nI1006 21:20:55.500058    3660 log.go:172] (0x40008f2140) (3) Data frame sent\nI1006 21:20:55.500171    3660 log.go:172] (0x40000fac60) Data frame received for 3\nI1006 21:20:55.500264    3660 log.go:172] (0x40008f2140) (3) Data frame handling\nI1006 21:20:55.501062    3660 log.go:172] (0x40000fac60) Data frame received for 1\nI1006 21:20:55.501162    3660 log.go:172] (0x40008f20a0) (1) Data frame handling\nI1006 21:20:55.501258    3660 log.go:172] (0x40008f20a0) (1) Data frame sent\nI1006 21:20:55.502726  
  3660 log.go:172] (0x40000fac60) (0x40008f20a0) Stream removed, broadcasting: 1\nI1006 21:20:55.505891    3660 log.go:172] (0x40000fac60) Go away received\nI1006 21:20:55.510326    3660 log.go:172] (0x40000fac60) (0x40008f20a0) Stream removed, broadcasting: 1\nI1006 21:20:55.510924    3660 log.go:172] (0x40000fac60) (0x40008f2140) Stream removed, broadcasting: 3\nI1006 21:20:55.511337    3660 log.go:172] (0x40000fac60) (0x40007f9a40) Stream removed, broadcasting: 5\n"
Oct  6 21:20:55.521: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Oct  6 21:20:55.521: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Oct  6 21:20:55.528: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Oct  6 21:21:05.536: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Oct  6 21:21:05.536: INFO: Waiting for statefulset status.replicas updated to 0
Oct  6 21:21:05.579: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Oct  6 21:21:05.581: INFO: ss-0  jerma-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:20:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:20:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:20:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:20:40 +0000 UTC  }]
Oct  6 21:21:05.581: INFO: 
Oct  6 21:21:05.582: INFO: StatefulSet ss has not reached scale 3, at 1
Oct  6 21:21:06.589: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.968763758s
Oct  6 21:21:07.863: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.961206506s
Oct  6 21:21:08.872: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.687581683s
Oct  6 21:21:09.882: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.67814377s
Oct  6 21:21:10.889: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.66884463s
Oct  6 21:21:11.898: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.660959887s
Oct  6 21:21:13.246: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.652669016s
Oct  6 21:21:14.255: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.304727187s
Oct  6 21:21:15.264: INFO: Verifying statefulset ss doesn't scale past 3 for another 295.399392ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-7027
Oct  6 21:21:16.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct  6 21:21:17.746: INFO: stderr: "I1006 21:21:17.617156    3694 log.go:172] (0x4000ace000) (0x4000ba6000) Create stream\nI1006 21:21:17.623630    3694 log.go:172] (0x4000ace000) (0x4000ba6000) Stream added, broadcasting: 1\nI1006 21:21:17.636572    3694 log.go:172] (0x4000ace000) Reply frame received for 1\nI1006 21:21:17.637690    3694 log.go:172] (0x4000ace000) (0x40007edd60) Create stream\nI1006 21:21:17.637807    3694 log.go:172] (0x4000ace000) (0x40007edd60) Stream added, broadcasting: 3\nI1006 21:21:17.639965    3694 log.go:172] (0x4000ace000) Reply frame received for 3\nI1006 21:21:17.640605    3694 log.go:172] (0x4000ace000) (0x40006aa000) Create stream\nI1006 21:21:17.640726    3694 log.go:172] (0x4000ace000) (0x40006aa000) Stream added, broadcasting: 5\nI1006 21:21:17.642665    3694 log.go:172] (0x4000ace000) Reply frame received for 5\nI1006 21:21:17.726240    3694 log.go:172] (0x4000ace000) Data frame received for 3\nI1006 21:21:17.726409    3694 log.go:172] (0x4000ace000) Data frame received for 1\nI1006 21:21:17.726691    3694 log.go:172] (0x40007edd60) (3) Data frame handling\nI1006 21:21:17.726910    3694 log.go:172] (0x4000ba6000) (1) Data frame handling\nI1006 21:21:17.727216    3694 log.go:172] (0x4000ace000) Data frame received for 5\nI1006 21:21:17.727380    3694 log.go:172] (0x40006aa000) (5) Data frame handling\nI1006 21:21:17.728094    3694 log.go:172] (0x40006aa000) (5) Data frame sent\nI1006 21:21:17.728220    3694 log.go:172] (0x4000ba6000) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1006 21:21:17.729347    3694 log.go:172] (0x40007edd60) (3) Data frame sent\nI1006 21:21:17.729475    3694 log.go:172] (0x4000ace000) Data frame received for 3\nI1006 21:21:17.729552    3694 log.go:172] (0x40007edd60) (3) Data frame handling\nI1006 21:21:17.729770    3694 log.go:172] (0x4000ace000) Data frame received for 5\nI1006 21:21:17.729905    3694 log.go:172] (0x40006aa000) (5) Data frame handling\nI1006 21:21:17.730683  
  3694 log.go:172] (0x4000ace000) (0x4000ba6000) Stream removed, broadcasting: 1\nI1006 21:21:17.734639    3694 log.go:172] (0x4000ace000) Go away received\nI1006 21:21:17.737057    3694 log.go:172] (0x4000ace000) (0x4000ba6000) Stream removed, broadcasting: 1\nI1006 21:21:17.737586    3694 log.go:172] (0x4000ace000) (0x40007edd60) Stream removed, broadcasting: 3\nI1006 21:21:17.738143    3694 log.go:172] (0x4000ace000) (0x40006aa000) Stream removed, broadcasting: 5\n"
Oct  6 21:21:17.747: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Oct  6 21:21:17.747: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Oct  6 21:21:17.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct  6 21:21:19.290: INFO: stderr: "I1006 21:21:19.167808    3718 log.go:172] (0x4000b2e0b0) (0x4000aaa140) Create stream\nI1006 21:21:19.170899    3718 log.go:172] (0x4000b2e0b0) (0x4000aaa140) Stream added, broadcasting: 1\nI1006 21:21:19.188195    3718 log.go:172] (0x4000b2e0b0) Reply frame received for 1\nI1006 21:21:19.189061    3718 log.go:172] (0x4000b2e0b0) (0x4000a9c000) Create stream\nI1006 21:21:19.189142    3718 log.go:172] (0x4000b2e0b0) (0x4000a9c000) Stream added, broadcasting: 3\nI1006 21:21:19.190815    3718 log.go:172] (0x4000b2e0b0) Reply frame received for 3\nI1006 21:21:19.191099    3718 log.go:172] (0x4000b2e0b0) (0x40008ba5a0) Create stream\nI1006 21:21:19.191165    3718 log.go:172] (0x4000b2e0b0) (0x40008ba5a0) Stream added, broadcasting: 5\nI1006 21:21:19.192259    3718 log.go:172] (0x4000b2e0b0) Reply frame received for 5\nI1006 21:21:19.270245    3718 log.go:172] (0x4000b2e0b0) Data frame received for 5\nI1006 21:21:19.270707    3718 log.go:172] (0x4000b2e0b0) Data frame received for 1\nI1006 21:21:19.270932    3718 log.go:172] (0x4000b2e0b0) Data frame received for 3\nI1006 21:21:19.271037    3718 log.go:172] (0x4000a9c000) (3) Data frame handling\nI1006 21:21:19.271120    3718 log.go:172] (0x4000aaa140) (1) Data frame handling\nI1006 21:21:19.271239    3718 log.go:172] (0x40008ba5a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI1006 21:21:19.273169    3718 log.go:172] (0x40008ba5a0) (5) Data frame sent\nI1006 21:21:19.273270    3718 log.go:172] (0x4000aaa140) (1) Data frame sent\nI1006 21:21:19.273452    3718 log.go:172] (0x4000a9c000) (3) Data frame sent\nI1006 21:21:19.273716    3718 log.go:172] (0x4000b2e0b0) Data frame received for 3\nI1006 21:21:19.273792    3718 log.go:172] (0x4000a9c000) (3) Data frame handling\nI1006 21:21:19.274092    3718 log.go:172] (0x4000b2e0b0) Data frame received for 5\nI1006 21:21:19.275279    3718 
log.go:172] (0x4000b2e0b0) (0x4000aaa140) Stream removed, broadcasting: 1\nI1006 21:21:19.277296    3718 log.go:172] (0x40008ba5a0) (5) Data frame handling\nI1006 21:21:19.277796    3718 log.go:172] (0x4000b2e0b0) Go away received\nI1006 21:21:19.281131    3718 log.go:172] (0x4000b2e0b0) (0x4000aaa140) Stream removed, broadcasting: 1\nI1006 21:21:19.281521    3718 log.go:172] (0x4000b2e0b0) (0x4000a9c000) Stream removed, broadcasting: 3\nI1006 21:21:19.281803    3718 log.go:172] (0x4000b2e0b0) (0x40008ba5a0) Stream removed, broadcasting: 5\n"
Oct  6 21:21:19.292: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Oct  6 21:21:19.292: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Oct  6 21:21:19.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct  6 21:21:20.721: INFO: stderr: "I1006 21:21:20.615018    3742 log.go:172] (0x4000994b00) (0x4000988140) Create stream\nI1006 21:21:20.617697    3742 log.go:172] (0x4000994b00) (0x4000988140) Stream added, broadcasting: 1\nI1006 21:21:20.629616    3742 log.go:172] (0x4000994b00) Reply frame received for 1\nI1006 21:21:20.630531    3742 log.go:172] (0x4000994b00) (0x40008cfc20) Create stream\nI1006 21:21:20.630621    3742 log.go:172] (0x4000994b00) (0x40008cfc20) Stream added, broadcasting: 3\nI1006 21:21:20.632039    3742 log.go:172] (0x4000994b00) Reply frame received for 3\nI1006 21:21:20.632276    3742 log.go:172] (0x4000994b00) (0x4000ab0000) Create stream\nI1006 21:21:20.632336    3742 log.go:172] (0x4000994b00) (0x4000ab0000) Stream added, broadcasting: 5\nI1006 21:21:20.633618    3742 log.go:172] (0x4000994b00) Reply frame received for 5\nI1006 21:21:20.700955    3742 log.go:172] (0x4000994b00) Data frame received for 3\nI1006 21:21:20.701424    3742 log.go:172] (0x4000994b00) Data frame received for 5\nI1006 21:21:20.701640    3742 log.go:172] (0x40008cfc20) (3) Data frame handling\nI1006 21:21:20.701817    3742 log.go:172] (0x4000994b00) Data frame received for 1\nI1006 21:21:20.701907    3742 log.go:172] (0x4000988140) (1) Data frame handling\nI1006 21:21:20.701997    3742 log.go:172] (0x4000ab0000) (5) Data frame handling\nI1006 21:21:20.703402    3742 log.go:172] (0x4000ab0000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI1006 21:21:20.703610    3742 log.go:172] (0x40008cfc20) (3) Data frame sent\nI1006 21:21:20.704124    3742 log.go:172] (0x4000994b00) Data frame received for 5\nI1006 21:21:20.704252    3742 log.go:172] (0x4000ab0000) (5) Data frame handling\nI1006 21:21:20.704729    3742 log.go:172] (0x4000994b00) Data frame received for 3\nI1006 21:21:20.704929    3742 log.go:172] (0x40008cfc20) (3) Data frame handling\nI1006 21:21:20.705494    3742 
log.go:172] (0x4000988140) (1) Data frame sent\nI1006 21:21:20.708312    3742 log.go:172] (0x4000994b00) (0x4000988140) Stream removed, broadcasting: 1\nI1006 21:21:20.708721    3742 log.go:172] (0x4000994b00) Go away received\nI1006 21:21:20.712344    3742 log.go:172] (0x4000994b00) (0x4000988140) Stream removed, broadcasting: 1\nI1006 21:21:20.712611    3742 log.go:172] (0x4000994b00) (0x40008cfc20) Stream removed, broadcasting: 3\nI1006 21:21:20.712825    3742 log.go:172] (0x4000994b00) (0x4000ab0000) Stream removed, broadcasting: 5\n"
Oct  6 21:21:20.721: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Oct  6 21:21:20.721: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
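The repeated `mv … || true` in these exec commands is what makes the readiness toggle safe to broadcast to every replica: on pods that never had the file moved aside, ss-1 and ss-2 here report `mv: can't rename '/tmp/index.html': No such file or directory`, yet the command still exits 0. A local sketch of that idiom, using a temp directory in place of a pod filesystem:

```shell
# Demonstrate the `mv ... || true` idiom from the exec commands above.
dir=$(mktemp -d)
echo ok > "$dir/index.html"

mv -v "$dir/index.html" "$dir/stash.html" || true   # file exists: mv succeeds
mv -v "$dir/index.html" "$dir/stash.html" || true   # file is gone: mv fails, || true absorbs it
echo "exit=$?"                                      # exit status is 0 either way
```

This is why the test can run the same restore command against all three pods without tracking which ones actually have a stashed file.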

Oct  6 21:21:20.728: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Oct  6 21:21:20.728: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Oct  6 21:21:20.728: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
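Burst behavior like this, where replicas are created and later deleted without waiting for each predecessor to become Ready, requires the StatefulSet to opt out of the default OrderedReady ordering via `podManagementPolicy: Parallel`. A hedged sketch of the relevant spec fields (labels and image details are illustrative; the log confirms the set name `ss`, the service `test`, the container `webserver`, and Apache's htdocs path):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test              # the "service test" created in the namespace above
  replicas: 3
  podManagementPolicy: Parallel  # launch/terminate pods in a burst, not one at a time
  selector:
    matchLabels:
      app: ss                    # illustrative label
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: httpd             # an Apache image; docroot is /usr/local/apache2/htdocs
        readinessProbe:
          httpGet:
            path: /index.html    # moving index.html aside flips Ready to false
            port: 80
```

With OrderedReady (the default) the scale-up above would have stalled at ss-1, since ss-0 was deliberately unready.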
STEP: Confirming that stateful set scale down will not halt with unhealthy stateful pod
Oct  6 21:21:20.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Oct  6 21:21:22.199: INFO: stderr: "I1006 21:21:22.069337    3767 log.go:172] (0x4000a3ab00) (0x40007e99a0) Create stream\nI1006 21:21:22.073398    3767 log.go:172] (0x4000a3ab00) (0x40007e99a0) Stream added, broadcasting: 1\nI1006 21:21:22.087453    3767 log.go:172] (0x4000a3ab00) Reply frame received for 1\nI1006 21:21:22.089036    3767 log.go:172] (0x4000a3ab00) (0x40007e9b80) Create stream\nI1006 21:21:22.089212    3767 log.go:172] (0x4000a3ab00) (0x40007e9b80) Stream added, broadcasting: 3\nI1006 21:21:22.091476    3767 log.go:172] (0x4000a3ab00) Reply frame received for 3\nI1006 21:21:22.091799    3767 log.go:172] (0x4000a3ab00) (0x4000988000) Create stream\nI1006 21:21:22.091885    3767 log.go:172] (0x4000a3ab00) (0x4000988000) Stream added, broadcasting: 5\nI1006 21:21:22.093416    3767 log.go:172] (0x4000a3ab00) Reply frame received for 5\nI1006 21:21:22.178250    3767 log.go:172] (0x4000a3ab00) Data frame received for 3\nI1006 21:21:22.178757    3767 log.go:172] (0x4000a3ab00) Data frame received for 5\nI1006 21:21:22.178908    3767 log.go:172] (0x4000988000) (5) Data frame handling\nI1006 21:21:22.179049    3767 log.go:172] (0x40007e9b80) (3) Data frame handling\nI1006 21:21:22.179259    3767 log.go:172] (0x4000a3ab00) Data frame received for 1\nI1006 21:21:22.179394    3767 log.go:172] (0x40007e99a0) (1) Data frame handling\nI1006 21:21:22.180180    3767 log.go:172] (0x40007e99a0) (1) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1006 21:21:22.180684    3767 log.go:172] (0x40007e9b80) (3) Data frame sent\nI1006 21:21:22.181371    3767 log.go:172] (0x4000a3ab00) Data frame received for 3\nI1006 21:21:22.181503    3767 log.go:172] (0x40007e9b80) (3) Data frame handling\nI1006 21:21:22.181597    3767 log.go:172] (0x4000988000) (5) Data frame sent\nI1006 21:21:22.181711    3767 log.go:172] (0x4000a3ab00) Data frame received for 5\nI1006 21:21:22.181797    3767 log.go:172] (0x4000988000) (5) Data frame handling\nI1006 21:21:22.182777  
  3767 log.go:172] (0x4000a3ab00) (0x40007e99a0) Stream removed, broadcasting: 1\nI1006 21:21:22.185757    3767 log.go:172] (0x4000a3ab00) Go away received\nI1006 21:21:22.189426    3767 log.go:172] (0x4000a3ab00) (0x40007e99a0) Stream removed, broadcasting: 1\nI1006 21:21:22.189827    3767 log.go:172] (0x4000a3ab00) (0x40007e9b80) Stream removed, broadcasting: 3\nI1006 21:21:22.190124    3767 log.go:172] (0x4000a3ab00) (0x4000988000) Stream removed, broadcasting: 5\n"
Oct  6 21:21:22.200: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Oct  6 21:21:22.200: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Oct  6 21:21:22.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Oct  6 21:21:23.751: INFO: stderr: "I1006 21:21:23.582043    3789 log.go:172] (0x4000a42000) (0x4000720000) Create stream\nI1006 21:21:23.584486    3789 log.go:172] (0x4000a42000) (0x4000720000) Stream added, broadcasting: 1\nI1006 21:21:23.594010    3789 log.go:172] (0x4000a42000) Reply frame received for 1\nI1006 21:21:23.594554    3789 log.go:172] (0x4000a42000) (0x400058b360) Create stream\nI1006 21:21:23.594613    3789 log.go:172] (0x4000a42000) (0x400058b360) Stream added, broadcasting: 3\nI1006 21:21:23.596401    3789 log.go:172] (0x4000a42000) Reply frame received for 3\nI1006 21:21:23.596908    3789 log.go:172] (0x4000a42000) (0x4000770000) Create stream\nI1006 21:21:23.596997    3789 log.go:172] (0x4000a42000) (0x4000770000) Stream added, broadcasting: 5\nI1006 21:21:23.598508    3789 log.go:172] (0x4000a42000) Reply frame received for 5\nI1006 21:21:23.701090    3789 log.go:172] (0x4000a42000) Data frame received for 5\nI1006 21:21:23.701480    3789 log.go:172] (0x4000770000) (5) Data frame handling\nI1006 21:21:23.702393    3789 log.go:172] (0x4000770000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1006 21:21:23.731451    3789 log.go:172] (0x4000a42000) Data frame received for 3\nI1006 21:21:23.731702    3789 log.go:172] (0x400058b360) (3) Data frame handling\nI1006 21:21:23.731837    3789 log.go:172] (0x400058b360) (3) Data frame sent\nI1006 21:21:23.731935    3789 log.go:172] (0x4000a42000) Data frame received for 5\nI1006 21:21:23.732086    3789 log.go:172] (0x4000770000) (5) Data frame handling\nI1006 21:21:23.732208    3789 log.go:172] (0x4000a42000) Data frame received for 3\nI1006 21:21:23.732381    3789 log.go:172] (0x400058b360) (3) Data frame handling\nI1006 21:21:23.733070    3789 log.go:172] (0x4000a42000) Data frame received for 1\nI1006 21:21:23.733298    3789 log.go:172] (0x4000720000) (1) Data frame handling\nI1006 21:21:23.733480    3789 log.go:172] (0x4000720000) (1) Data frame sent\nI1006 21:21:23.736272  
  3789 log.go:172] (0x4000a42000) (0x4000720000) Stream removed, broadcasting: 1\nI1006 21:21:23.738105    3789 log.go:172] (0x4000a42000) Go away received\nI1006 21:21:23.743876    3789 log.go:172] (0x4000a42000) (0x4000720000) Stream removed, broadcasting: 1\nI1006 21:21:23.744255    3789 log.go:172] (0x4000a42000) (0x400058b360) Stream removed, broadcasting: 3\nI1006 21:21:23.744504    3789 log.go:172] (0x4000a42000) (0x4000770000) Stream removed, broadcasting: 5\n"
Oct  6 21:21:23.752: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Oct  6 21:21:23.752: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Oct  6 21:21:23.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7027 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Oct  6 21:21:25.289: INFO: stderr: "I1006 21:21:25.133327    3813 log.go:172] (0x4000b04000) (0x4000809a40) Create stream\nI1006 21:21:25.136473    3813 log.go:172] (0x4000b04000) (0x4000809a40) Stream added, broadcasting: 1\nI1006 21:21:25.148474    3813 log.go:172] (0x4000b04000) Reply frame received for 1\nI1006 21:21:25.149082    3813 log.go:172] (0x4000b04000) (0x4000809c20) Create stream\nI1006 21:21:25.149141    3813 log.go:172] (0x4000b04000) (0x4000809c20) Stream added, broadcasting: 3\nI1006 21:21:25.150916    3813 log.go:172] (0x4000b04000) Reply frame received for 3\nI1006 21:21:25.151120    3813 log.go:172] (0x4000b04000) (0x4000809cc0) Create stream\nI1006 21:21:25.151173    3813 log.go:172] (0x4000b04000) (0x4000809cc0) Stream added, broadcasting: 5\nI1006 21:21:25.152506    3813 log.go:172] (0x4000b04000) Reply frame received for 5\nI1006 21:21:25.255691    3813 log.go:172] (0x4000b04000) Data frame received for 5\nI1006 21:21:25.256117    3813 log.go:172] (0x4000809cc0) (5) Data frame handling\nI1006 21:21:25.257207    3813 log.go:172] (0x4000809cc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1006 21:21:25.274550    3813 log.go:172] (0x4000b04000) Data frame received for 3\nI1006 21:21:25.274691    3813 log.go:172] (0x4000809c20) (3) Data frame handling\nI1006 21:21:25.274807    3813 log.go:172] (0x4000b04000) Data frame received for 5\nI1006 21:21:25.274965    3813 log.go:172] (0x4000809cc0) (5) Data frame handling\nI1006 21:21:25.275162    3813 log.go:172] (0x4000809c20) (3) Data frame sent\nI1006 21:21:25.275239    3813 log.go:172] (0x4000b04000) Data frame received for 3\nI1006 21:21:25.275287    3813 log.go:172] (0x4000809c20) (3) Data frame handling\nI1006 21:21:25.276161    3813 log.go:172] (0x4000b04000) Data frame received for 1\nI1006 21:21:25.276270    3813 log.go:172] (0x4000809a40) (1) Data frame handling\nI1006 21:21:25.276388    3813 log.go:172] (0x4000809a40) (1) Data frame sent\nI1006 21:21:25.277638  
  3813 log.go:172] (0x4000b04000) (0x4000809a40) Stream removed, broadcasting: 1\nI1006 21:21:25.280081    3813 log.go:172] (0x4000b04000) Go away received\nI1006 21:21:25.282380    3813 log.go:172] (0x4000b04000) (0x4000809a40) Stream removed, broadcasting: 1\nI1006 21:21:25.282591    3813 log.go:172] (0x4000b04000) (0x4000809c20) Stream removed, broadcasting: 3\nI1006 21:21:25.282849    3813 log.go:172] (0x4000b04000) (0x4000809cc0) Stream removed, broadcasting: 5\n"
Oct  6 21:21:25.291: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Oct  6 21:21:25.291: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Oct  6 21:21:25.291: INFO: Waiting for statefulset status.replicas updated to 0
Oct  6 21:21:25.296: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Oct  6 21:21:35.310: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Oct  6 21:21:35.310: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Oct  6 21:21:35.311: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Oct  6 21:21:35.336: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Oct  6 21:21:35.336: INFO: ss-0  jerma-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:20:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:20:40 +0000 UTC  }]
Oct  6 21:21:35.337: INFO: ss-1  jerma-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:05 +0000 UTC  }]
Oct  6 21:21:35.337: INFO: ss-2  jerma-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:05 +0000 UTC  }]
Oct  6 21:21:35.337: INFO: 
Oct  6 21:21:35.337: INFO: StatefulSet ss has not reached scale 0, at 3
Oct  6 21:21:36.345: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Oct  6 21:21:36.345: INFO: ss-0  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:20:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:20:40 +0000 UTC  }]
Oct  6 21:21:36.345: INFO: ss-1  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:05 +0000 UTC  }]
Oct  6 21:21:36.345: INFO: ss-2  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:05 +0000 UTC  }]
Oct  6 21:21:36.346: INFO: 
Oct  6 21:21:36.346: INFO: StatefulSet ss has not reached scale 0, at 3
Oct  6 21:21:37.353: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Oct  6 21:21:37.353: INFO: ss-0  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:20:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:20:40 +0000 UTC  }]
Oct  6 21:21:37.354: INFO: ss-1  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:05 +0000 UTC  }]
Oct  6 21:21:37.354: INFO: ss-2  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:05 +0000 UTC  }]
Oct  6 21:21:37.354: INFO: 
Oct  6 21:21:37.354: INFO: StatefulSet ss has not reached scale 0, at 3
Oct  6 21:21:38.362: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Oct  6 21:21:38.362: INFO: ss-0  jerma-worker  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:20:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:20:40 +0000 UTC  }]
Oct  6 21:21:38.362: INFO: ss-2  jerma-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:05 +0000 UTC  }]
Oct  6 21:21:38.363: INFO: 
Oct  6 21:21:38.363: INFO: StatefulSet ss has not reached scale 0, at 2
Oct  6 21:21:39.375: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Oct  6 21:21:39.375: INFO: ss-0  jerma-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:20:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:20:40 +0000 UTC  }]
Oct  6 21:21:39.375: INFO: 
Oct  6 21:21:39.375: INFO: StatefulSet ss has not reached scale 0, at 1
Oct  6 21:21:40.394: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Oct  6 21:21:40.394: INFO: ss-0  jerma-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:20:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:20:40 +0000 UTC  }]
Oct  6 21:21:40.394: INFO: 
Oct  6 21:21:40.394: INFO: StatefulSet ss has not reached scale 0, at 1
Oct  6 21:21:41.402: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Oct  6 21:21:41.402: INFO: ss-0  jerma-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:20:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:20:40 +0000 UTC  }]
Oct  6 21:21:41.402: INFO: 
Oct  6 21:21:41.403: INFO: StatefulSet ss has not reached scale 0, at 1
Oct  6 21:21:42.410: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Oct  6 21:21:42.410: INFO: ss-0  jerma-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:20:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:20:40 +0000 UTC  }]
Oct  6 21:21:42.411: INFO: 
Oct  6 21:21:42.411: INFO: StatefulSet ss has not reached scale 0, at 1
Oct  6 21:21:43.418: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Oct  6 21:21:43.419: INFO: ss-0  jerma-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:20:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:21:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-06 21:20:40 +0000 UTC  }]
Oct  6 21:21:43.419: INFO: 
Oct  6 21:21:43.419: INFO: StatefulSet ss has not reached scale 0, at 1
Oct  6 21:21:44.425: INFO: Verifying statefulset ss doesn't scale past 0 for another 901.587943ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-7027
Oct  6 21:21:45.446: INFO: Scaling statefulset ss to 0
Oct  6 21:21:45.461: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Oct  6 21:21:45.465: INFO: Deleting all statefulset in ns statefulset-7027
Oct  6 21:21:45.469: INFO: Scaling statefulset ss to 0
Oct  6 21:21:45.480: INFO: Waiting for statefulset status.replicas updated to 0
Oct  6 21:21:45.483: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:21:45.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7027" for this suite.

• [SLOW TEST:65.102 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":224,"skipped":3777,"failed":0}
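The burst-scaling test above drives a three-replica StatefulSet named `ss` from 3 down to 0, with pods deleted in parallel rather than in reverse ordinal order (ss-1 and ss-2 disappear before ss-0 in the log). A minimal manifest of the shape the log implies might look like the sketch below; the log only shows the pod names (ss-0..ss-2) and the container name `webserver`, so the image, labels, and headless-service name are assumptions.

```yaml
# Hypothetical reconstruction of the "ss" StatefulSet exercised above.
# Only the StatefulSet name, replica count, and container name are
# taken from the log; everything else is an illustrative assumption.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test            # assumed; the log never prints the service name
  podManagementPolicy: Parallel  # burst scaling implies parallel pod management
  replicas: 3
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.38-alpine  # assumed image
        ports:
        - containerPort: 80
```

The scale-down the test performs corresponds to `kubectl scale statefulset ss --replicas=0 -n statefulset-7027`, after which the log polls until `status.replicas` reaches 0.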
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:21:45.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct  6 21:21:49.125: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct  6 21:21:51.459: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616109, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616109, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616109, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616109, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct  6 21:21:54.514: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Oct  6 21:21:58.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-6139 to-be-attached-pod -i -c=container1'
Oct  6 21:21:59.925: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:21:59.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6139" for this suite.
STEP: Destroying namespace "webhook-6139-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.570 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":225,"skipped":3778,"failed":0}
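The webhook test above registers, via the AdmissionRegistration API, a validating webhook that rejects `kubectl attach` (a CONNECT request to the `pods/attach` subresource), which is why the attach command exits with rc 1. A registration of roughly that shape is sketched below; the service name `e2e-test-webhook` and namespace `webhook-6139` appear in the log, but the configuration name, webhook name, path, and certificate are assumptions.

```yaml
# Sketch of a webhook registration that denies attaching to pods.
# Names other than the service and namespace are illustrative.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-attaching-pod        # assumed name
webhooks:
- name: deny-attaching-pod.example.com  # assumed name
  clientConfig:
    service:
      namespace: webhook-6139     # namespace shown in the log
      name: e2e-test-webhook      # service name shown in the log
      path: /pods/attach          # assumed path
    caBundle: <base64-encoded CA certificate>
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CONNECT"]       # kubectl attach issues CONNECT
    resources: ["pods/attach"]
  failurePolicy: Fail
  sideEffects: None
  admissionReviewVersions: ["v1"]
```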
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:22:00.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 21:22:00.246: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:22:00.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9981" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":226,"skipped":3782,"failed":0}
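The CRD test above gets, updates, and patches the `/status` sub-resource of a custom resource, which only exists when the definition enables it under `subresources`. A minimal definition of that kind is sketched below; the group, kind, and schema are illustrative assumptions, since the log does not print the CRD it creates.

```yaml
# Minimal CRD with the status sub-resource enabled, the feature the
# get/update/patch-status test exercises. All names are assumptions.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
    subresources:
      status: {}   # exposes /status for GET, PUT, and PATCH
```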
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:22:00.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a working application  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Oct  6 21:22:01.110: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Oct  6 21:22:01.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2954'
Oct  6 21:22:02.770: INFO: stderr: ""
Oct  6 21:22:02.770: INFO: stdout: "service/agnhost-slave created\n"
Oct  6 21:22:02.772: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Oct  6 21:22:02.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2954'
Oct  6 21:22:04.399: INFO: stderr: ""
Oct  6 21:22:04.399: INFO: stdout: "service/agnhost-master created\n"
Oct  6 21:22:04.400: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Oct  6 21:22:04.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2954'
Oct  6 21:22:06.231: INFO: stderr: ""
Oct  6 21:22:06.231: INFO: stdout: "service/frontend created\n"
Oct  6 21:22:06.234: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Oct  6 21:22:06.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2954'
Oct  6 21:22:07.837: INFO: stderr: ""
Oct  6 21:22:07.837: INFO: stdout: "deployment.apps/frontend created\n"
Oct  6 21:22:07.838: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Oct  6 21:22:07.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2954'
Oct  6 21:22:09.390: INFO: stderr: ""
Oct  6 21:22:09.390: INFO: stdout: "deployment.apps/agnhost-master created\n"
Oct  6 21:22:09.391: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Oct  6 21:22:09.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2954'
Oct  6 21:22:11.490: INFO: stderr: ""
Oct  6 21:22:11.490: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Oct  6 21:22:11.491: INFO: Waiting for all frontend pods to be Running.
Oct  6 21:22:16.543: INFO: Waiting for frontend to serve content.
Oct  6 21:22:17.574: INFO: Trying to add a new entry to the guestbook.
Oct  6 21:22:17.587: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Oct  6 21:22:17.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2954'
Oct  6 21:22:18.948: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct  6 21:22:18.948: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Oct  6 21:22:18.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2954'
Oct  6 21:22:20.247: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct  6 21:22:20.247: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Oct  6 21:22:20.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2954'
Oct  6 21:22:21.497: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct  6 21:22:21.497: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Oct  6 21:22:21.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2954'
Oct  6 21:22:22.703: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct  6 21:22:22.703: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Oct  6 21:22:22.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2954'
Oct  6 21:22:24.068: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct  6 21:22:24.068: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Oct  6 21:22:24.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2954'
Oct  6 21:22:25.346: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct  6 21:22:25.346: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:22:25.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2954" for this suite.

• [SLOW TEST:24.587 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:381
    should create and stop a working application  [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":227,"skipped":3798,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:22:25.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:22:26.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-4026" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":228,"skipped":3814,"failed":0}
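The Lease test above only confirms the `coordination.k8s.io` API is served; an object of the kind it creates and updates is sketched below. The namespace comes from the log; the lease name, holder identity, and timings are illustrative assumptions.

```yaml
# Sketch of a coordination.k8s.io Lease, the resource whose API the
# test exercises. Field values are illustrative assumptions.
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: example-lease
  namespace: lease-test-4026     # namespace from the log
spec:
  holderIdentity: holder-1       # assumed holder
  leaseDurationSeconds: 30
  acquireTime: "2020-10-06T21:22:25.000000Z"
  renewTime: "2020-10-06T21:22:25.000000Z"
```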
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:22:26.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 21:22:26.633: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Oct  6 21:22:26.651: INFO: Pod name sample-pod: Found 0 pods out of 1
Oct  6 21:22:31.658: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Oct  6 21:22:31.659: INFO: Creating deployment "test-rolling-update-deployment"
Oct  6 21:22:31.666: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Oct  6 21:22:31.731: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Oct  6 21:22:33.746: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Oct  6 21:22:33.752: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616151, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616151, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616151, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616151, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 21:22:35.758: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Oct  6 21:22:35.776: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-3682 /apis/apps/v1/namespaces/deployment-3682/deployments/test-rolling-update-deployment 31f58b8b-9fa9-4bdc-b6e8-96b51df76f2a 3619504 1 2020-10-06 21:22:31 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4004952028  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-10-06 21:22:31 +0000 UTC,LastTransitionTime:2020-10-06 21:22:31 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-10-06 21:22:35 +0000 UTC,LastTransitionTime:2020-10-06 21:22:31 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

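The Deployment dump above reports a RollingUpdate strategy with 25% maxUnavailable and 25% maxSurge, which are the apps/v1 defaults applied when no strategy is set explicitly. As a spec fragment, that strategy is:

```yaml
# The rolling-update strategy the dump above reports; 25%/25% are the
# apps/v1 Deployment defaults when no strategy is specified.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
```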
Oct  6 21:22:35.785: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-3682 /apis/apps/v1/namespaces/deployment-3682/replicasets/test-rolling-update-deployment-67cf4f6444 0de03d1f-fbed-4512-b65a-50c5aaf362ee 3619493 1 2020-10-06 21:22:31 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 31f58b8b-9fa9-4bdc-b6e8-96b51df76f2a 0x40049524c7 0x40049524c8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4004952538  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Oct  6 21:22:35.785: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Oct  6 21:22:35.786: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-3682 /apis/apps/v1/namespaces/deployment-3682/replicasets/test-rolling-update-controller 15834958-c616-4fa5-b0c9-4bff7c94323a 3619503 2 2020-10-06 21:22:26 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 31f58b8b-9fa9-4bdc-b6e8-96b51df76f2a 0x40049523df 0x40049523f0}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x4004952458  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Oct  6 21:22:35.795: INFO: Pod "test-rolling-update-deployment-67cf4f6444-ncq5s" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-ncq5s test-rolling-update-deployment-67cf4f6444- deployment-3682 /api/v1/namespaces/deployment-3682/pods/test-rolling-update-deployment-67cf4f6444-ncq5s 9754758e-406f-40a0-9d4f-1b9598e46686 3619492 0 2020-10-06 21:22:31 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 0de03d1f-fbed-4512-b65a-50c5aaf362ee 0x40049529a7 0x40049529a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xm7kx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xm7kx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xm7kx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Restart
Policy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 21:22:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 21:22:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 21:22:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 21:22:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.143,StartTime:2020-10-06 21:22:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-06 21:22:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://4a557eb04bfc9a732a04ad2c40138d1eb49afebc57be7ef70a0577f3219cee04,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.143,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:22:35.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3682" for this suite.

• [SLOW TEST:9.275 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":229,"skipped":3848,"failed":0}
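The ReplicaSet dumps above carry the annotations `deployment.kubernetes.io/desired-replicas:1` and `deployment.kubernetes.io/max-replicas:2`: with the default RollingUpdate strategy, maxSurge of 25% is rounded up, so one surge replica is allowed on top of the single desired replica. A toy Python sketch of that arithmetic (illustrative function names, not the deployment controller's actual Go code):

```python
import math

def resolve_surge(desired, max_surge="25%"):
    """Resolve a maxSurge value (absolute int or percent string) against
    the desired replica count, rounding percentages up as the
    deployment controller does."""
    if isinstance(max_surge, str) and max_surge.endswith("%"):
        return math.ceil(desired * int(max_surge[:-1]) / 100)
    return int(max_surge)

def max_replicas(desired, max_surge="25%"):
    # Mirrors the deployment.kubernetes.io/max-replicas annotation:
    # desired replicas plus the resolved surge allowance.
    return desired + resolve_surge(desired, max_surge)

print(max_replicas(1))   # 1 + ceil(0.25) = 2, matching the annotation above
print(max_replicas(10))  # 10 + ceil(2.5) = 13
```

With `desired=1` the surge rounds up from 0.25 to a full replica, which is why the rollout briefly runs two pods before the old ReplicaSet is scaled to zero.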
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:22:35.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:22:42.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4198" for this suite.

• [SLOW TEST:7.197 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":230,"skipped":3879,"failed":0}
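"Ensuring resource quota status is calculated" above is a poll loop: the framework re-reads the quota until its status reflects the spec or a timeout expires (the Go side uses `wait.Poll`). A minimal Python sketch of that pattern, assuming a hypothetical `condition` callable rather than a real API client:

```python
import time

def wait_for(condition, timeout=30.0, interval=0.5):
    """Poll `condition` until it returns True or the timeout elapses,
    mirroring the wait.Poll pattern the e2e framework uses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Example: a fake quota whose status becomes calculated on the third check.
checks = iter([False, False, True])
assert wait_for(lambda: next(checks), timeout=5, interval=0.01)
```

The same loop shape underlies the many "Waiting up to 5m0s for pod ... to be 'success or failure'" lines elsewhere in this log.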
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:22:43.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct  6 21:22:45.525: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct  6 21:22:47.539: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616165, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616165, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616165, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616165, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 21:22:49.546: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616165, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616165, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616165, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616165, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct  6 21:22:52.594: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created mutating webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of mutating webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:22:53.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6825" for this suite.
STEP: Destroying namespace "webhook-6825-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.229 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":231,"skipped":3910,"failed":0}
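The test above lists its webhook configurations and then deletes them as a collection, selecting by label. A toy sketch of that label-selector matching (hypothetical object shape and label key, not the real API machinery):

```python
def select(objects, selector):
    """Return objects whose labels match every key=value in the selector,
    the way a DeleteCollection call with a labelSelector picks its targets."""
    return [o for o in objects
            if all(o["labels"].get(k) == v for k, v in selector.items())]

webhooks = [
    {"name": "to-be-mutated",     "labels": {"e2e-list-test-uid": "abc"}},
    {"name": "unrelated-webhook", "labels": {}},
]
matched = select(webhooks, {"e2e-list-test-uid": "abc"})
print([w["name"] for w in matched])
```

Because the deletion is selector-scoped, the final "configMap that should not be mutated" step confirms the webhooks are truly gone rather than merely unlisted.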
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:22:53.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1006 21:22:54.154343       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Oct  6 21:22:54.154: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:22:54.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8743" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":232,"skipped":3918,"failed":0}
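The garbage-collector test above deletes a Deployment and waits for its ReplicaSet and pods to disappear: cascading deletion follows `ownerReferences` until no orphaned dependents remain. A toy model of that cascade in plain Python (the real collector walks a dependency graph with finalizers and foreground/background policies; this only sketches the ownership chain):

```python
def collect_garbage(objects):
    """Repeatedly remove any object whose ownerReference points at a
    UID that no longer exists -- a toy model of cascading deletion."""
    live = dict(objects)  # uid -> {"owner": uid or None}
    changed = True
    while changed:
        changed = False
        for uid, obj in list(live.items()):
            owner = obj.get("owner")
            if owner is not None and owner not in live:
                del live[uid]        # orphaned: its owner is gone
                changed = True
    return live

cluster = {
    "deploy": {"owner": None},
    "rs":     {"owner": "deploy"},  # ReplicaSet owned by the Deployment
    "pod-1":  {"owner": "rs"},
    "pod-2":  {"owner": "rs"},
}
del cluster["deploy"]               # "delete the deployment"
print(collect_garbage(cluster))     # everything downstream is collected
```

The interim "expected 0 rs, got 1 rs" lines in the log are this same convergence observed mid-flight: the collector needs a pass or two before the chain is empty.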
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:22:54.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Oct  6 21:22:54.383: INFO: Waiting up to 5m0s for pod "downward-api-025b1c3b-31fe-4ace-860f-79d5c7486cdd" in namespace "downward-api-1099" to be "success or failure"
Oct  6 21:22:54.427: INFO: Pod "downward-api-025b1c3b-31fe-4ace-860f-79d5c7486cdd": Phase="Pending", Reason="", readiness=false. Elapsed: 43.045782ms
Oct  6 21:22:56.434: INFO: Pod "downward-api-025b1c3b-31fe-4ace-860f-79d5c7486cdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050163529s
Oct  6 21:22:58.472: INFO: Pod "downward-api-025b1c3b-31fe-4ace-860f-79d5c7486cdd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088416422s
STEP: Saw pod success
Oct  6 21:22:58.472: INFO: Pod "downward-api-025b1c3b-31fe-4ace-860f-79d5c7486cdd" satisfied condition "success or failure"
Oct  6 21:22:58.476: INFO: Trying to get logs from node jerma-worker pod downward-api-025b1c3b-31fe-4ace-860f-79d5c7486cdd container dapi-container: 
STEP: delete the pod
Oct  6 21:22:58.918: INFO: Waiting for pod downward-api-025b1c3b-31fe-4ace-860f-79d5c7486cdd to disappear
Oct  6 21:22:59.013: INFO: Pod downward-api-025b1c3b-31fe-4ace-860f-79d5c7486cdd no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:22:59.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1099" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3938,"failed":0}
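The Downward API test above checks that when a container declares no CPU or memory limits, `resourceFieldRef` env vars fall back to the node's allocatable amount. A minimal sketch of that fallback rule (hypothetical function and node values; the kubelet does this with resource.Quantity arithmetic, not strings):

```python
def effective_limit(container_limits, node_allocatable, resource):
    """Value a limits.cpu / limits.memory downward-API env var resolves to:
    the container's own limit if set, otherwise the node's allocatable."""
    return container_limits.get(resource) or node_allocatable[resource]

node = {"cpu": "16", "memory": "64Gi"}   # hypothetical allocatable values
print(effective_limit({}, node, "cpu"))               # falls back to "16"
print(effective_limit({"cpu": "500m"}, node, "cpu"))  # explicit limit wins
```

This is why the test needs no resources stanza at all in its pod spec: the defaulted values are still well-defined and observable from inside the container.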
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:22:59.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Oct  6 21:22:59.182: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Oct  6 21:22:59.211: INFO: Waiting for terminating namespaces to be deleted...
Oct  6 21:22:59.215: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Oct  6 21:22:59.225: INFO: kube-proxy-knc9b from kube-system started at 2020-09-23 08:27:39 +0000 UTC (1 container statuses recorded)
Oct  6 21:22:59.225: INFO: 	Container kube-proxy ready: true, restart count 0
Oct  6 21:22:59.226: INFO: kindnet-nlsvd from kube-system started at 2020-09-23 08:27:39 +0000 UTC (1 container statuses recorded)
Oct  6 21:22:59.226: INFO: 	Container kindnet-cni ready: true, restart count 0
Oct  6 21:22:59.226: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Oct  6 21:22:59.242: INFO: kindnet-5wksn from kube-system started at 2020-09-23 08:27:38 +0000 UTC (1 container statuses recorded)
Oct  6 21:22:59.242: INFO: 	Container kindnet-cni ready: true, restart count 0
Oct  6 21:22:59.242: INFO: kube-proxy-jgndm from kube-system started at 2020-09-23 08:27:38 +0000 UTC (1 container statuses recorded)
Oct  6 21:22:59.242: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-worker
STEP: verifying the node has the label node jerma-worker2
Oct  6 21:22:59.454: INFO: Pod kindnet-5wksn requesting resource cpu=100m on Node jerma-worker2
Oct  6 21:22:59.454: INFO: Pod kindnet-nlsvd requesting resource cpu=100m on Node jerma-worker
Oct  6 21:22:59.455: INFO: Pod kube-proxy-jgndm requesting resource cpu=0m on Node jerma-worker2
Oct  6 21:22:59.455: INFO: Pod kube-proxy-knc9b requesting resource cpu=0m on Node jerma-worker
STEP: Starting Pods to consume most of the cluster CPU.
Oct  6 21:22:59.455: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker
Oct  6 21:22:59.503: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2530e62b-6e80-466e-8272-5f34e2715ed3.163b8423480be55e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5269/filler-pod-2530e62b-6e80-466e-8272-5f34e2715ed3 to jerma-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2530e62b-6e80-466e-8272-5f34e2715ed3.163b842396ee1715], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2530e62b-6e80-466e-8272-5f34e2715ed3.163b8423ec15c60b], Reason = [Created], Message = [Created container filler-pod-2530e62b-6e80-466e-8272-5f34e2715ed3]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2530e62b-6e80-466e-8272-5f34e2715ed3.163b84240ab78ea6], Reason = [Started], Message = [Started container filler-pod-2530e62b-6e80-466e-8272-5f34e2715ed3]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-7daab3d2-8a43-4044-b25b-2285f0900c97.163b84234ee6cea1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5269/filler-pod-7daab3d2-8a43-4044-b25b-2285f0900c97 to jerma-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-7daab3d2-8a43-4044-b25b-2285f0900c97.163b8423d4e7f7ec], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-7daab3d2-8a43-4044-b25b-2285f0900c97.163b84242b1200b1], Reason = [Created], Message = [Created container filler-pod-7daab3d2-8a43-4044-b25b-2285f0900c97]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-7daab3d2-8a43-4044-b25b-2285f0900c97.163b84243b89f644], Reason = [Started], Message = [Started container filler-pod-7daab3d2-8a43-4044-b25b-2285f0900c97]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.163b8424b824e179], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.163b8424baf14820], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:23:06.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5269" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:7.751 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":278,"completed":234,"skipped":3969,"failed":0}
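The predicates test above sums the existing CPU requests per node (kindnet 100m, kube-proxy 0m), creates filler pods sized to consume almost everything left (11130m), then submits one more pod and expects `FailedScheduling: Insufficient cpu`. A sketch of that fit check in Python, with a hypothetical allocatable figure since the log does not print the nodes' capacity:

```python
def to_millicores(q):
    # "100m" -> 100 millicores; "2" -> 2000 (whole cores to millicores)
    return int(q[:-1]) if q.endswith("m") else int(q) * 1000

def fits(node_allocatable, existing_requests, new_request):
    """CPU half of the PodFitsResources predicate: does the new request
    fit in allocatable minus what is already requested?"""
    used = sum(to_millicores(r) for r in existing_requests)
    return to_millicores(new_request) <= to_millicores(node_allocatable) - used

# Hypothetical node sized so an 11130m filler leaves only ~100m headroom.
print(fits("11330m", ["100m"], "11130m"))           # filler pod fits
print(fits("11330m", ["100m", "11130m"], "600m"))   # "Insufficient cpu"
```

Note the predicate works on requests, not actual usage: the filler pods run `pause` and consume essentially no CPU, yet still block the extra pod.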
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:23:06.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Oct  6 21:23:06.868: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:23:08.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5310" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":235,"skipped":3981,"failed":0}
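`kubectl proxy -p 0` delegates port selection to the operating system: binding a listener to port 0 makes the kernel substitute a free ephemeral port, which the proxy then reports. The same mechanism is visible with a plain socket:

```python
import socket

# Binding to port 0 lets the kernel pick a free ephemeral port -- the
# mechanism `kubectl proxy -p 0` relies on to choose its listen port.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind(("127.0.0.1", 0))
    assigned = s.getsockname()[1]
    print(f"listening on port {assigned}")
```

The test then curls the proxy's `/api/` endpoint on whatever port was reported, so it never hard-codes a port and cannot collide with another listener.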
SSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:23:08.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-6813ab5b-014d-4bd7-8ffc-fa0c8c8b5714
STEP: Creating a pod to test consume secrets
Oct  6 21:23:08.187: INFO: Waiting up to 5m0s for pod "pod-secrets-246ce2dc-6f81-4a74-a0e1-b26289e247cc" in namespace "secrets-5218" to be "success or failure"
Oct  6 21:23:08.196: INFO: Pod "pod-secrets-246ce2dc-6f81-4a74-a0e1-b26289e247cc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.723066ms
Oct  6 21:23:10.239: INFO: Pod "pod-secrets-246ce2dc-6f81-4a74-a0e1-b26289e247cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052422043s
Oct  6 21:23:12.278: INFO: Pod "pod-secrets-246ce2dc-6f81-4a74-a0e1-b26289e247cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0913208s
STEP: Saw pod success
Oct  6 21:23:12.278: INFO: Pod "pod-secrets-246ce2dc-6f81-4a74-a0e1-b26289e247cc" satisfied condition "success or failure"
Oct  6 21:23:12.462: INFO: Trying to get logs from node jerma-worker pod pod-secrets-246ce2dc-6f81-4a74-a0e1-b26289e247cc container secret-env-test: 
STEP: delete the pod
Oct  6 21:23:12.828: INFO: Waiting for pod pod-secrets-246ce2dc-6f81-4a74-a0e1-b26289e247cc to disappear
Oct  6 21:23:12.833: INFO: Pod pod-secrets-246ce2dc-6f81-4a74-a0e1-b26289e247cc no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:23:12.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5218" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3985,"failed":0}
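The Secrets test above stores a value in a Secret and verifies the pod sees it as an environment variable. Secret `data` is base64-encoded at rest; the kubelet decodes it before injection. A sketch of that decode step (the key and value here are hypothetical, not necessarily what the test stores):

```python
import base64

def secret_to_env(secret_data):
    """Decode a Secret's base64 `data` map into plain env-var values,
    as the kubelet does before injecting them into the container."""
    return {k: base64.b64decode(v).decode() for k, v in secret_data.items()}

secret = {"SECRET_DATA": base64.b64encode(b"value-1").decode()}
print(secret_to_env(secret))  # {'SECRET_DATA': 'value-1'}
```

The container (`secret-env-test` in the log) then just prints its environment, and the framework greps the pod logs for the expected value.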
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:23:12.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-4822
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Oct  6 21:23:13.245: INFO: Found 0 stateful pods, waiting for 3
Oct  6 21:23:23.255: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Oct  6 21:23:23.255: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Oct  6 21:23:23.255: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct  6 21:23:33.255: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Oct  6 21:23:33.255: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Oct  6 21:23:33.255: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Oct  6 21:23:33.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4822 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Oct  6 21:23:34.749: INFO: stderr: "I1006 21:23:34.605345    4164 log.go:172] (0x400011c2c0) (0x40006f2000) Create stream\nI1006 21:23:34.611889    4164 log.go:172] (0x400011c2c0) (0x40006f2000) Stream added, broadcasting: 1\nI1006 21:23:34.625252    4164 log.go:172] (0x400011c2c0) Reply frame received for 1\nI1006 21:23:34.626052    4164 log.go:172] (0x400011c2c0) (0x4000742000) Create stream\nI1006 21:23:34.626138    4164 log.go:172] (0x400011c2c0) (0x4000742000) Stream added, broadcasting: 3\nI1006 21:23:34.627976    4164 log.go:172] (0x400011c2c0) Reply frame received for 3\nI1006 21:23:34.628574    4164 log.go:172] (0x400011c2c0) (0x40006f20a0) Create stream\nI1006 21:23:34.628691    4164 log.go:172] (0x400011c2c0) (0x40006f20a0) Stream added, broadcasting: 5\nI1006 21:23:34.630761    4164 log.go:172] (0x400011c2c0) Reply frame received for 5\nI1006 21:23:34.686798    4164 log.go:172] (0x400011c2c0) Data frame received for 5\nI1006 21:23:34.687161    4164 log.go:172] (0x40006f20a0) (5) Data frame handling\nI1006 21:23:34.688052    4164 log.go:172] (0x40006f20a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1006 21:23:34.728625    4164 log.go:172] (0x400011c2c0) Data frame received for 5\nI1006 21:23:34.728768    4164 log.go:172] (0x40006f20a0) (5) Data frame handling\nI1006 21:23:34.729380    4164 log.go:172] (0x400011c2c0) Data frame received for 3\nI1006 21:23:34.729478    4164 log.go:172] (0x4000742000) (3) Data frame handling\nI1006 21:23:34.729575    4164 log.go:172] (0x4000742000) (3) Data frame sent\nI1006 21:23:34.729658    4164 log.go:172] (0x400011c2c0) Data frame received for 3\nI1006 21:23:34.729732    4164 log.go:172] (0x4000742000) (3) Data frame handling\nI1006 21:23:34.730594    4164 log.go:172] (0x400011c2c0) Data frame received for 1\nI1006 21:23:34.730706    4164 log.go:172] (0x40006f2000) (1) Data frame handling\nI1006 21:23:34.730796    4164 log.go:172] (0x40006f2000) (1) Data frame sent\nI1006 21:23:34.731350    4164 log.go:172] (0x400011c2c0) (0x40006f2000) Stream removed, broadcasting: 1\nI1006 21:23:34.734039    4164 log.go:172] (0x400011c2c0) Go away received\nI1006 21:23:34.739968    4164 log.go:172] (0x400011c2c0) (0x40006f2000) Stream removed, broadcasting: 1\nI1006 21:23:34.740438    4164 log.go:172] (0x400011c2c0) (0x4000742000) Stream removed, broadcasting: 3\nI1006 21:23:34.740755    4164 log.go:172] (0x400011c2c0) (0x40006f20a0) Stream removed, broadcasting: 5\n"
Oct  6 21:23:34.750: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Oct  6 21:23:34.751: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Oct  6 21:23:34.845: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Oct  6 21:23:44.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4822 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct  6 21:23:46.405: INFO: stderr: "I1006 21:23:46.274415    4187 log.go:172] (0x4000a4e0b0) (0x40007d9cc0) Create stream\nI1006 21:23:46.277475    4187 log.go:172] (0x4000a4e0b0) (0x40007d9cc0) Stream added, broadcasting: 1\nI1006 21:23:46.287778    4187 log.go:172] (0x4000a4e0b0) Reply frame received for 1\nI1006 21:23:46.288747    4187 log.go:172] (0x4000a4e0b0) (0x4000b4c000) Create stream\nI1006 21:23:46.288959    4187 log.go:172] (0x4000a4e0b0) (0x4000b4c000) Stream added, broadcasting: 3\nI1006 21:23:46.290931    4187 log.go:172] (0x4000a4e0b0) Reply frame received for 3\nI1006 21:23:46.291390    4187 log.go:172] (0x4000a4e0b0) (0x40007d9d60) Create stream\nI1006 21:23:46.291482    4187 log.go:172] (0x4000a4e0b0) (0x40007d9d60) Stream added, broadcasting: 5\nI1006 21:23:46.293058    4187 log.go:172] (0x4000a4e0b0) Reply frame received for 5\nI1006 21:23:46.384236    4187 log.go:172] (0x4000a4e0b0) Data frame received for 5\nI1006 21:23:46.384571    4187 log.go:172] (0x40007d9d60) (5) Data frame handling\nI1006 21:23:46.384974    4187 log.go:172] (0x4000a4e0b0) Data frame received for 3\nI1006 21:23:46.385115    4187 log.go:172] (0x4000b4c000) (3) Data frame handling\nI1006 21:23:46.385269    4187 log.go:172] (0x4000b4c000) (3) Data frame sent\nI1006 21:23:46.385488    4187 log.go:172] (0x40007d9d60) (5) Data frame sent\nI1006 21:23:46.385792    4187 log.go:172] (0x4000a4e0b0) Data frame received for 1\nI1006 21:23:46.385977    4187 log.go:172] (0x40007d9cc0) (1) Data frame handling\nI1006 21:23:46.386102    4187 log.go:172] (0x4000a4e0b0) Data frame received for 3\nI1006 21:23:46.386238    4187 log.go:172] (0x4000b4c000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1006 21:23:46.386369    4187 log.go:172] (0x4000a4e0b0) Data frame received for 5\nI1006 21:23:46.386491    4187 log.go:172] (0x40007d9d60) (5) Data frame handling\nI1006 21:23:46.386659    4187 log.go:172] (0x40007d9cc0) (1) Data frame sent\nI1006 21:23:46.388716    4187 log.go:172] (0x4000a4e0b0) (0x40007d9cc0) Stream removed, broadcasting: 1\nI1006 21:23:46.392545    4187 log.go:172] (0x4000a4e0b0) Go away received\nI1006 21:23:46.394950    4187 log.go:172] (0x4000a4e0b0) (0x40007d9cc0) Stream removed, broadcasting: 1\nI1006 21:23:46.395826    4187 log.go:172] (0x4000a4e0b0) (0x4000b4c000) Stream removed, broadcasting: 3\nI1006 21:23:46.396766    4187 log.go:172] (0x4000a4e0b0) (0x40007d9d60) Stream removed, broadcasting: 5\n"
Oct  6 21:23:46.405: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Oct  6 21:23:46.406: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Oct  6 21:23:56.440: INFO: Waiting for StatefulSet statefulset-4822/ss2 to complete update
Oct  6 21:23:56.441: INFO: Waiting for Pod statefulset-4822/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Oct  6 21:23:56.441: INFO: Waiting for Pod statefulset-4822/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Oct  6 21:24:06.453: INFO: Waiting for StatefulSet statefulset-4822/ss2 to complete update
Oct  6 21:24:06.453: INFO: Waiting for Pod statefulset-4822/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Oct  6 21:24:16.453: INFO: Waiting for StatefulSet statefulset-4822/ss2 to complete update
STEP: Rolling back to a previous revision
Oct  6 21:24:26.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4822 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Oct  6 21:24:27.959: INFO: stderr: "I1006 21:24:27.824280    4209 log.go:172] (0x4000a820b0) (0x4000af4140) Create stream\nI1006 21:24:27.827114    4209 log.go:172] (0x4000a820b0) (0x4000af4140) Stream added, broadcasting: 1\nI1006 21:24:27.836035    4209 log.go:172] (0x4000a820b0) Reply frame received for 1\nI1006 21:24:27.836565    4209 log.go:172] (0x4000a820b0) (0x40007e0820) Create stream\nI1006 21:24:27.836623    4209 log.go:172] (0x4000a820b0) (0x40007e0820) Stream added, broadcasting: 3\nI1006 21:24:27.838044    4209 log.go:172] (0x4000a820b0) Reply frame received for 3\nI1006 21:24:27.838307    4209 log.go:172] (0x4000a820b0) (0x4000af4280) Create stream\nI1006 21:24:27.838365    4209 log.go:172] (0x4000a820b0) (0x4000af4280) Stream added, broadcasting: 5\nI1006 21:24:27.839430    4209 log.go:172] (0x4000a820b0) Reply frame received for 5\nI1006 21:24:27.914603    4209 log.go:172] (0x4000a820b0) Data frame received for 5\nI1006 21:24:27.914961    4209 log.go:172] (0x4000af4280) (5) Data frame handling\nI1006 21:24:27.915880    4209 log.go:172] (0x4000af4280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1006 21:24:27.938272    4209 log.go:172] (0x4000a820b0) Data frame received for 3\nI1006 21:24:27.938490    4209 log.go:172] (0x40007e0820) (3) Data frame handling\nI1006 21:24:27.938613    4209 log.go:172] (0x4000a820b0) Data frame received for 5\nI1006 21:24:27.938716    4209 log.go:172] (0x4000af4280) (5) Data frame handling\nI1006 21:24:27.938911    4209 log.go:172] (0x40007e0820) (3) Data frame sent\nI1006 21:24:27.939091    4209 log.go:172] (0x4000a820b0) Data frame received for 3\nI1006 21:24:27.939216    4209 log.go:172] (0x40007e0820) (3) Data frame handling\nI1006 21:24:27.940685    4209 log.go:172] (0x4000a820b0) Data frame received for 1\nI1006 21:24:27.940825    4209 log.go:172] (0x4000af4140) (1) Data frame handling\nI1006 21:24:27.941158    4209 log.go:172] (0x4000af4140) (1) Data frame sent\nI1006 21:24:27.942404    4209 log.go:172] (0x4000a820b0) (0x4000af4140) Stream removed, broadcasting: 1\nI1006 21:24:27.947732    4209 log.go:172] (0x4000a820b0) Go away received\nI1006 21:24:27.950451    4209 log.go:172] (0x4000a820b0) (0x4000af4140) Stream removed, broadcasting: 1\nI1006 21:24:27.950939    4209 log.go:172] (0x4000a820b0) (0x40007e0820) Stream removed, broadcasting: 3\nI1006 21:24:27.951619    4209 log.go:172] (0x4000a820b0) (0x4000af4280) Stream removed, broadcasting: 5\n"
Oct  6 21:24:27.960: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Oct  6 21:24:27.960: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Oct  6 21:24:38.043: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Oct  6 21:24:48.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4822 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct  6 21:24:49.700: INFO: stderr: "I1006 21:24:49.569226    4232 log.go:172] (0x40003ecd10) (0x40006581e0) Create stream\nI1006 21:24:49.571796    4232 log.go:172] (0x40003ecd10) (0x40006581e0) Stream added, broadcasting: 1\nI1006 21:24:49.583095    4232 log.go:172] (0x40003ecd10) Reply frame received for 1\nI1006 21:24:49.583668    4232 log.go:172] (0x40003ecd10) (0x4000658280) Create stream\nI1006 21:24:49.583725    4232 log.go:172] (0x40003ecd10) (0x4000658280) Stream added, broadcasting: 3\nI1006 21:24:49.585328    4232 log.go:172] (0x40003ecd10) Reply frame received for 3\nI1006 21:24:49.585693    4232 log.go:172] (0x40003ecd10) (0x4000790000) Create stream\nI1006 21:24:49.585774    4232 log.go:172] (0x40003ecd10) (0x4000790000) Stream added, broadcasting: 5\nI1006 21:24:49.587180    4232 log.go:172] (0x40003ecd10) Reply frame received for 5\nI1006 21:24:49.678155    4232 log.go:172] (0x40003ecd10) Data frame received for 5\nI1006 21:24:49.678597    4232 log.go:172] (0x40003ecd10) Data frame received for 3\nI1006 21:24:49.678743    4232 log.go:172] (0x4000658280) (3) Data frame handling\nI1006 21:24:49.679008    4232 log.go:172] (0x40003ecd10) Data frame received for 1\nI1006 21:24:49.679155    4232 log.go:172] (0x40006581e0) (1) Data frame handling\nI1006 21:24:49.679479    4232 log.go:172] (0x4000790000) (5) Data frame handling\nI1006 21:24:49.680025    4232 log.go:172] (0x4000790000) (5) Data frame sent\nI1006 21:24:49.680217    4232 log.go:172] (0x4000658280) (3) Data frame sent\nI1006 21:24:49.680496    4232 log.go:172] (0x40006581e0) (1) Data frame sent\nI1006 21:24:49.680739    4232 log.go:172] (0x40003ecd10) Data frame received for 5\nI1006 21:24:49.680991    4232 log.go:172] (0x4000790000) (5) Data frame handling\nI1006 21:24:49.681612    4232 log.go:172] (0x40003ecd10) Data frame received for 3\nI1006 21:24:49.681717    4232 log.go:172] (0x4000658280) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1006 21:24:49.685058    4232 log.go:172] (0x40003ecd10) (0x40006581e0) Stream removed, broadcasting: 1\nI1006 21:24:49.687693    4232 log.go:172] (0x40003ecd10) Go away received\nI1006 21:24:49.691863    4232 log.go:172] (0x40003ecd10) (0x40006581e0) Stream removed, broadcasting: 1\nI1006 21:24:49.692328    4232 log.go:172] (0x40003ecd10) (0x4000658280) Stream removed, broadcasting: 3\nI1006 21:24:49.692593    4232 log.go:172] (0x40003ecd10) (0x4000790000) Stream removed, broadcasting: 5\n"
Oct  6 21:24:49.701: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Oct  6 21:24:49.701: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Oct  6 21:25:09.738: INFO: Waiting for StatefulSet statefulset-4822/ss2 to complete update
Oct  6 21:25:09.738: INFO: Waiting for Pod statefulset-4822/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Oct  6 21:25:19.751: INFO: Deleting all statefulset in ns statefulset-4822
Oct  6 21:25:19.755: INFO: Scaling statefulset ss2 to 0
Oct  6 21:25:39.775: INFO: Waiting for statefulset status.replicas updated to 0
Oct  6 21:25:39.780: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:25:39.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4822" for this suite.

• [SLOW TEST:146.961 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":237,"skipped":4019,"failed":0}
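For reference, a StatefulSet comparable to the "ss2" set exercised above can be sketched as the manifest below. The image and the rolling-update behavior (reverse ordinal order, ControllerRevision hashes such as ss2-84f9d6bf57 / ss2-65c7964b94) come from the log; the service name, labels, and container name are assumptions. A rollback is simply another template update back to the previous image, which re-adopts the older revision.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test            # headless governing service, created by the test as "service test"
  replicas: 3
  selector:
    matchLabels:
      app: ss2                 # label assumed for illustration
  updateStrategy:
    type: RollingUpdate        # pods are replaced one at a time in reverse ordinal order
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver        # container name assumed
        image: docker.io/library/httpd:2.4.38-alpine   # updated to 2.4.39-alpine, then rolled back
```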
S
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:25:39.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Oct  6 21:25:39.908: INFO: Pod name pod-release: Found 0 pods out of 1
Oct  6 21:25:44.918: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:25:44.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8119" for this suite.

• [SLOW TEST:5.224 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":238,"skipped":4020,"failed":0}
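The ReplicationController test above works by changing a pod's label so it no longer matches the RC's selector; the controller then "releases" (orphans) that pod and creates a replacement to restore the replica count. A minimal sketch of such an RC, with assumed labels and image:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release          # overwriting this label on a pod releases it from the RC
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pod-release      # container name and image assumed
        image: docker.io/library/httpd:2.4.38-alpine
```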
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:25:45.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Oct  6 21:25:45.180: INFO: Waiting up to 5m0s for pod "pod-a2bc7851-87db-4c85-b689-4ad1dc1feb85" in namespace "emptydir-6727" to be "success or failure"
Oct  6 21:25:45.193: INFO: Pod "pod-a2bc7851-87db-4c85-b689-4ad1dc1feb85": Phase="Pending", Reason="", readiness=false. Elapsed: 12.181039ms
Oct  6 21:25:47.339: INFO: Pod "pod-a2bc7851-87db-4c85-b689-4ad1dc1feb85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157880096s
Oct  6 21:25:49.347: INFO: Pod "pod-a2bc7851-87db-4c85-b689-4ad1dc1feb85": Phase="Running", Reason="", readiness=true. Elapsed: 4.166459542s
Oct  6 21:25:51.386: INFO: Pod "pod-a2bc7851-87db-4c85-b689-4ad1dc1feb85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.205030877s
STEP: Saw pod success
Oct  6 21:25:51.386: INFO: Pod "pod-a2bc7851-87db-4c85-b689-4ad1dc1feb85" satisfied condition "success or failure"
Oct  6 21:25:51.391: INFO: Trying to get logs from node jerma-worker pod pod-a2bc7851-87db-4c85-b689-4ad1dc1feb85 container test-container: 
STEP: delete the pod
Oct  6 21:25:51.457: INFO: Waiting for pod pod-a2bc7851-87db-4c85-b689-4ad1dc1feb85 to disappear
Oct  6 21:25:51.461: INFO: Pod pod-a2bc7851-87db-4c85-b689-4ad1dc1feb85 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:25:51.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6727" for this suite.

• [SLOW TEST:6.424 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":4022,"failed":0}
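The "(root,0666,tmpfs)" case above verifies file modes on a memory-backed emptyDir volume. A sketch of the kind of pod involved follows; the image, args, and mount path are assumptions (the e2e framework uses its own mount-test image), while `medium: Memory` is what makes the emptyDir tmpfs-backed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-test    # name assumed
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # stand-in for the framework's mount-test image
    command: ["sh", "-c", "stat -c '%a' /test-volume"]  # illustrative permission check
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory           # tmpfs-backed emptyDir
```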
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:25:51.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct  6 21:25:55.344: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct  6 21:25:57.623: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616355, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616355, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616355, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616355, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct  6 21:26:00.659: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:26:00.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1889" for this suite.
STEP: Destroying namespace "webhook-1889-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.595 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":240,"skipped":4022,"failed":0}
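The patching/updating steps above toggle the webhook's `rules[].operations` list so that CREATE requests are alternately admitted and rejected. A hedged sketch of such a ValidatingWebhookConfiguration; the webhook name, service path, and rule details are assumptions beyond what the log shows:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: e2e-test-webhook       # name assumed
webhooks:
- name: deny-configmap.example.com   # webhook name assumed
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]     # the test updates/patches this list to drop and re-add CREATE
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-1889
      name: e2e-test-webhook
      path: /validate          # path assumed
  sideEffects: None
  admissionReviewVersions: ["v1"]
  failurePolicy: Fail
```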
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:26:01.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 21:26:01.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Oct  6 21:26:20.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2558 create -f -'
Oct  6 21:26:24.824: INFO: stderr: ""
Oct  6 21:26:24.824: INFO: stdout: "e2e-test-crd-publish-openapi-1506-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Oct  6 21:26:24.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2558 delete e2e-test-crd-publish-openapi-1506-crds test-cr'
Oct  6 21:26:26.066: INFO: stderr: ""
Oct  6 21:26:26.066: INFO: stdout: "e2e-test-crd-publish-openapi-1506-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Oct  6 21:26:26.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2558 apply -f -'
Oct  6 21:26:27.663: INFO: stderr: ""
Oct  6 21:26:27.663: INFO: stdout: "e2e-test-crd-publish-openapi-1506-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Oct  6 21:26:27.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2558 delete e2e-test-crd-publish-openapi-1506-crds test-cr'
Oct  6 21:26:28.934: INFO: stderr: ""
Oct  6 21:26:28.934: INFO: stdout: "e2e-test-crd-publish-openapi-1506-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Oct  6 21:26:28.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1506-crds'
Oct  6 21:26:30.468: INFO: stderr: ""
Oct  6 21:26:30.468: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1506-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:26:49.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2558" for this suite.

• [SLOW TEST:48.107 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":241,"skipped":4030,"failed":0}
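"Preserving unknown fields at the schema root" corresponds to a CRD whose root schema sets `x-kubernetes-preserve-unknown-fields: true`, so client-side validation accepts arbitrary properties, as the kubectl create/apply steps above demonstrate. A sketch assembled from the group and kind names in the log (other fields are assumptions):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crd-publish-openapi-1506-crds.crd-publish-openapi-test-unknown-at-root.example.com
spec:
  group: crd-publish-openapi-test-unknown-at-root.example.com
  scope: Namespaced            # scope assumed
  names:
    plural: e2e-test-crd-publish-openapi-1506-crds
    kind: E2e-test-crd-publish-openapi-1506-crd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # unknown fields kept at the schema root
```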
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:26:49.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run pod
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1760
[It] should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Oct  6 21:26:49.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2399'
Oct  6 21:26:50.548: INFO: stderr: ""
Oct  6 21:26:50.548: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1765
Oct  6 21:26:50.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2399'
Oct  6 21:27:04.341: INFO: stderr: ""
Oct  6 21:27:04.341: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:27:04.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2399" for this suite.

• [SLOW TEST:15.170 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1756
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":278,"completed":242,"skipped":4037,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:27:04.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Oct  6 21:27:04.450: INFO: Waiting up to 5m0s for pod "downwardapi-volume-28e056ad-9fc9-4f1c-9f57-6eec9da97581" in namespace "projected-742" to be "success or failure"
Oct  6 21:27:04.458: INFO: Pod "downwardapi-volume-28e056ad-9fc9-4f1c-9f57-6eec9da97581": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10386ms
Oct  6 21:27:06.465: INFO: Pod "downwardapi-volume-28e056ad-9fc9-4f1c-9f57-6eec9da97581": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015246913s
Oct  6 21:27:08.473: INFO: Pod "downwardapi-volume-28e056ad-9fc9-4f1c-9f57-6eec9da97581": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022620935s
STEP: Saw pod success
Oct  6 21:27:08.473: INFO: Pod "downwardapi-volume-28e056ad-9fc9-4f1c-9f57-6eec9da97581" satisfied condition "success or failure"
Oct  6 21:27:08.478: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-28e056ad-9fc9-4f1c-9f57-6eec9da97581 container client-container: 
STEP: delete the pod
Oct  6 21:27:08.534: INFO: Waiting for pod downwardapi-volume-28e056ad-9fc9-4f1c-9f57-6eec9da97581 to disappear
Oct  6 21:27:08.547: INFO: Pod downwardapi-volume-28e056ad-9fc9-4f1c-9f57-6eec9da97581 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:27:08.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-742" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":4043,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:27:08.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Oct  6 21:27:08.694: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e3513482-21d1-41bf-8ee8-7aa8ad16660b" in namespace "downward-api-2011" to be "success or failure"
Oct  6 21:27:08.720: INFO: Pod "downwardapi-volume-e3513482-21d1-41bf-8ee8-7aa8ad16660b": Phase="Pending", Reason="", readiness=false. Elapsed: 25.833651ms
Oct  6 21:27:10.727: INFO: Pod "downwardapi-volume-e3513482-21d1-41bf-8ee8-7aa8ad16660b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032767104s
Oct  6 21:27:12.813: INFO: Pod "downwardapi-volume-e3513482-21d1-41bf-8ee8-7aa8ad16660b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119083521s
STEP: Saw pod success
Oct  6 21:27:12.813: INFO: Pod "downwardapi-volume-e3513482-21d1-41bf-8ee8-7aa8ad16660b" satisfied condition "success or failure"
Oct  6 21:27:12.818: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-e3513482-21d1-41bf-8ee8-7aa8ad16660b container client-container: 
STEP: delete the pod
Oct  6 21:27:12.938: INFO: Waiting for pod downwardapi-volume-e3513482-21d1-41bf-8ee8-7aa8ad16660b to disappear
Oct  6 21:27:12.942: INFO: Pod downwardapi-volume-e3513482-21d1-41bf-8ee8-7aa8ad16660b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:27:12.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2011" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":4053,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:27:12.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1006 21:27:53.452339       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Oct  6 21:27:53.452: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:27:53.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9436" for this suite.

• [SLOW TEST:40.510 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":245,"skipped":4059,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:27:53.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 21:27:53.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:27:57.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8098" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4067,"failed":0}
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:27:57.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Oct  6 21:28:07.945: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Oct  6 21:28:07.951: INFO: Pod pod-with-poststart-http-hook still exists
Oct  6 21:28:09.951: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Oct  6 21:28:09.959: INFO: Pod pod-with-poststart-http-hook still exists
Oct  6 21:28:11.951: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Oct  6 21:28:11.958: INFO: Pod pod-with-poststart-http-hook still exists
Oct  6 21:28:13.951: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Oct  6 21:28:13.957: INFO: Pod pod-with-poststart-http-hook still exists
Oct  6 21:28:15.951: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Oct  6 21:28:15.957: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:28:15.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7002" for this suite.

• [SLOW TEST:18.343 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4071,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:28:15.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Oct  6 21:28:16.039: INFO: >>> kubeConfig: /root/.kube/config
Oct  6 21:28:34.943: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:29:32.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5332" for this suite.

• [SLOW TEST:76.412 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":248,"skipped":4103,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:29:32.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Oct  6 21:29:32.515: INFO: Waiting up to 5m0s for pod "downwardapi-volume-62be59a9-5655-42d9-90b3-e926058623ee" in namespace "downward-api-6076" to be "success or failure"
Oct  6 21:29:32.527: INFO: Pod "downwardapi-volume-62be59a9-5655-42d9-90b3-e926058623ee": Phase="Pending", Reason="", readiness=false. Elapsed: 12.213372ms
Oct  6 21:29:34.587: INFO: Pod "downwardapi-volume-62be59a9-5655-42d9-90b3-e926058623ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072347525s
Oct  6 21:29:36.595: INFO: Pod "downwardapi-volume-62be59a9-5655-42d9-90b3-e926058623ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079668816s
STEP: Saw pod success
Oct  6 21:29:36.595: INFO: Pod "downwardapi-volume-62be59a9-5655-42d9-90b3-e926058623ee" satisfied condition "success or failure"
Oct  6 21:29:36.602: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-62be59a9-5655-42d9-90b3-e926058623ee container client-container: 
STEP: delete the pod
Oct  6 21:29:36.680: INFO: Waiting for pod downwardapi-volume-62be59a9-5655-42d9-90b3-e926058623ee to disappear
Oct  6 21:29:36.755: INFO: Pod downwardapi-volume-62be59a9-5655-42d9-90b3-e926058623ee no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:29:36.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6076" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4119,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:29:36.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 21:29:36.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Oct  6 21:29:55.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8097 create -f -'
Oct  6 21:30:00.342: INFO: stderr: ""
Oct  6 21:30:00.343: INFO: stdout: "e2e-test-crd-publish-openapi-9654-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Oct  6 21:30:00.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8097 delete e2e-test-crd-publish-openapi-9654-crds test-cr'
Oct  6 21:30:01.587: INFO: stderr: ""
Oct  6 21:30:01.587: INFO: stdout: "e2e-test-crd-publish-openapi-9654-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Oct  6 21:30:01.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8097 apply -f -'
Oct  6 21:30:03.409: INFO: stderr: ""
Oct  6 21:30:03.409: INFO: stdout: "e2e-test-crd-publish-openapi-9654-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Oct  6 21:30:03.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8097 delete e2e-test-crd-publish-openapi-9654-crds test-cr'
Oct  6 21:30:04.656: INFO: stderr: ""
Oct  6 21:30:04.656: INFO: stdout: "e2e-test-crd-publish-openapi-9654-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Oct  6 21:30:04.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9654-crds'
Oct  6 21:30:06.172: INFO: stderr: ""
Oct  6 21:30:06.172: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9654-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:30:15.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8097" for this suite.

• [SLOW TEST:39.159 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":250,"skipped":4133,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:30:15.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run deployment
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1629
[It] should create a deployment from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Oct  6 21:30:16.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-3270'
Oct  6 21:30:17.324: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Oct  6 21:30:17.324: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634
Oct  6 21:30:19.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3270'
Oct  6 21:30:20.701: INFO: stderr: ""
Oct  6 21:30:20.701: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:30:20.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3270" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Deprecated] [Conformance]","total":278,"completed":251,"skipped":4138,"failed":0}
S
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:30:20.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-2123
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-2123
STEP: creating replication controller externalsvc in namespace services-2123
I1006 21:30:20.915326       7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-2123, replica count: 2
I1006 21:30:23.966707       7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1006 21:30:26.967365       7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Oct  6 21:30:27.021: INFO: Creating new exec pod
Oct  6 21:30:31.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2123 execpodxnqvg -- /bin/sh -x -c nslookup clusterip-service'
Oct  6 21:30:32.579: INFO: stderr: "I1006 21:30:32.432807    4596 log.go:172] (0x4000107600) (0x40007e1ea0) Create stream\nI1006 21:30:32.438627    4596 log.go:172] (0x4000107600) (0x40007e1ea0) Stream added, broadcasting: 1\nI1006 21:30:32.451405    4596 log.go:172] (0x4000107600) Reply frame received for 1\nI1006 21:30:32.452415    4596 log.go:172] (0x4000107600) (0x4000808000) Create stream\nI1006 21:30:32.452504    4596 log.go:172] (0x4000107600) (0x4000808000) Stream added, broadcasting: 3\nI1006 21:30:32.454919    4596 log.go:172] (0x4000107600) Reply frame received for 3\nI1006 21:30:32.455488    4596 log.go:172] (0x4000107600) (0x4000832000) Create stream\nI1006 21:30:32.455605    4596 log.go:172] (0x4000107600) (0x4000832000) Stream added, broadcasting: 5\nI1006 21:30:32.457779    4596 log.go:172] (0x4000107600) Reply frame received for 5\nI1006 21:30:32.551153    4596 log.go:172] (0x4000107600) Data frame received for 5\nI1006 21:30:32.551434    4596 log.go:172] (0x4000832000) (5) Data frame handling\nI1006 21:30:32.552065    4596 log.go:172] (0x4000832000) (5) Data frame sent\n+ nslookup clusterip-service\nI1006 21:30:32.560179    4596 log.go:172] (0x4000107600) Data frame received for 3\nI1006 21:30:32.560311    4596 log.go:172] (0x4000808000) (3) Data frame handling\nI1006 21:30:32.560480    4596 log.go:172] (0x4000808000) (3) Data frame sent\nI1006 21:30:32.561361    4596 log.go:172] (0x4000107600) Data frame received for 3\nI1006 21:30:32.561483    4596 log.go:172] (0x4000808000) (3) Data frame handling\nI1006 21:30:32.561617    4596 log.go:172] (0x4000808000) (3) Data frame sent\nI1006 21:30:32.561865    4596 log.go:172] (0x4000107600) Data frame received for 3\nI1006 21:30:32.562058    4596 log.go:172] (0x4000107600) Data frame received for 5\nI1006 21:30:32.562255    4596 log.go:172] (0x4000832000) (5) Data frame handling\nI1006 21:30:32.562394    4596 log.go:172] (0x4000808000) (3) Data frame handling\nI1006 21:30:32.563699    4596 log.go:172] (0x4000107600) Data frame received for 1\nI1006 21:30:32.563789    4596 log.go:172] (0x40007e1ea0) (1) Data frame handling\nI1006 21:30:32.563872    4596 log.go:172] (0x40007e1ea0) (1) Data frame sent\nI1006 21:30:32.565614    4596 log.go:172] (0x4000107600) (0x40007e1ea0) Stream removed, broadcasting: 1\nI1006 21:30:32.568172    4596 log.go:172] (0x4000107600) Go away received\nI1006 21:30:32.572112    4596 log.go:172] (0x4000107600) (0x40007e1ea0) Stream removed, broadcasting: 1\nI1006 21:30:32.572532    4596 log.go:172] (0x4000107600) (0x4000808000) Stream removed, broadcasting: 3\nI1006 21:30:32.572781    4596 log.go:172] (0x4000107600) (0x4000832000) Stream removed, broadcasting: 5\n"
Oct  6 21:30:32.581: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-2123.svc.cluster.local\tcanonical name = externalsvc.services-2123.svc.cluster.local.\nName:\texternalsvc.services-2123.svc.cluster.local\nAddress: 10.104.92.250\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-2123, will wait for the garbage collector to delete the pods
Oct  6 21:30:32.652: INFO: Deleting ReplicationController externalsvc took: 13.602109ms
Oct  6 21:30:32.953: INFO: Terminating ReplicationController externalsvc pods took: 300.935132ms
Oct  6 21:30:44.454: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:30:44.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2123" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:23.784 seconds]
[sig-network] Services
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":252,"skipped":4139,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
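For context, the state the test above drives the Service into before its nslookup check can be sketched as a manifest. The names and namespace are taken from the log; everything else is an inference from the DNS output (the CNAME to `externalsvc.services-2123.svc.cluster.local`), not copied from the test source:

```yaml
# Hedged reconstruction: a ClusterIP Service switched to type
# ExternalName. After the change, DNS for clusterip-service resolves
# as a CNAME to spec.externalName, which is exactly what the
# "nslookup clusterip-service" output in the log shows.
apiVersion: v1
kind: Service
metadata:
  name: clusterip-service        # name from the log above
  namespace: services-2123       # namespace from the log above
spec:
  type: ExternalName
  externalName: externalsvc.services-2123.svc.cluster.local
```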
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:30:44.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Oct  6 21:30:44.568: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bbb9c1d2-6426-429c-aced-b3a3ab99cb63" in namespace "downward-api-3556" to be "success or failure"
Oct  6 21:30:44.613: INFO: Pod "downwardapi-volume-bbb9c1d2-6426-429c-aced-b3a3ab99cb63": Phase="Pending", Reason="", readiness=false. Elapsed: 44.682484ms
Oct  6 21:30:46.679: INFO: Pod "downwardapi-volume-bbb9c1d2-6426-429c-aced-b3a3ab99cb63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110782643s
Oct  6 21:30:48.686: INFO: Pod "downwardapi-volume-bbb9c1d2-6426-429c-aced-b3a3ab99cb63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.117708934s
STEP: Saw pod success
Oct  6 21:30:48.686: INFO: Pod "downwardapi-volume-bbb9c1d2-6426-429c-aced-b3a3ab99cb63" satisfied condition "success or failure"
Oct  6 21:30:48.690: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-bbb9c1d2-6426-429c-aced-b3a3ab99cb63 container client-container: 
STEP: delete the pod
Oct  6 21:30:48.763: INFO: Waiting for pod downwardapi-volume-bbb9c1d2-6426-429c-aced-b3a3ab99cb63 to disappear
Oct  6 21:30:48.767: INFO: Pod downwardapi-volume-bbb9c1d2-6426-429c-aced-b3a3ab99cb63 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:30:48.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3556" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4160,"failed":0}
S
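The downward API test above creates a pod whose container reads its own CPU request from a file. A minimal sketch of that shape of pod follows; only the container name `client-container` comes from the log, and the image, request value, paths, and command are assumptions:

```yaml
# Sketch (assumed values) of a pod surfacing its own CPU request via a
# downwardAPI volume. With divisor 1m, the file holds the request in
# millicores (here "250").
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name seen in the log
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                    # assumed request
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m
```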
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:30:48.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-6232
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-6232
I1006 21:30:48.931776       7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-6232, replica count: 2
I1006 21:30:51.983048       7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1006 21:30:54.983670       7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct  6 21:30:54.983: INFO: Creating new exec pod
Oct  6 21:31:00.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6232 execpodqclmm -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Oct  6 21:31:01.446: INFO: stderr: "I1006 21:31:01.327092    4619 log.go:172] (0x4000aae000) (0x40009d6000) Create stream\nI1006 21:31:01.329491    4619 log.go:172] (0x4000aae000) (0x40009d6000) Stream added, broadcasting: 1\nI1006 21:31:01.337783    4619 log.go:172] (0x4000aae000) Reply frame received for 1\nI1006 21:31:01.338321    4619 log.go:172] (0x4000aae000) (0x4000a20000) Create stream\nI1006 21:31:01.338375    4619 log.go:172] (0x4000aae000) (0x4000a20000) Stream added, broadcasting: 3\nI1006 21:31:01.345247    4619 log.go:172] (0x4000aae000) Reply frame received for 3\nI1006 21:31:01.345754    4619 log.go:172] (0x4000aae000) (0x40009d60a0) Create stream\nI1006 21:31:01.345858    4619 log.go:172] (0x4000aae000) (0x40009d60a0) Stream added, broadcasting: 5\nI1006 21:31:01.347646    4619 log.go:172] (0x4000aae000) Reply frame received for 5\nI1006 21:31:01.426308    4619 log.go:172] (0x4000aae000) Data frame received for 5\nI1006 21:31:01.426688    4619 log.go:172] (0x4000aae000) Data frame received for 3\nI1006 21:31:01.426804    4619 log.go:172] (0x4000a20000) (3) Data frame handling\nI1006 21:31:01.426902    4619 log.go:172] (0x40009d60a0) (5) Data frame handling\nI1006 21:31:01.427826    4619 log.go:172] (0x4000aae000) Data frame received for 1\nI1006 21:31:01.427959    4619 log.go:172] (0x40009d6000) (1) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nI1006 21:31:01.429106    4619 log.go:172] (0x40009d60a0) (5) Data frame sent\nI1006 21:31:01.429373    4619 log.go:172] (0x40009d6000) (1) Data frame sent\nI1006 21:31:01.429807    4619 log.go:172] (0x4000aae000) Data frame received for 5\nI1006 21:31:01.429930    4619 log.go:172] (0x40009d60a0) (5) Data frame handling\nI1006 21:31:01.430041    4619 log.go:172] (0x40009d60a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI1006 21:31:01.430185    4619 log.go:172] (0x4000aae000) Data frame received for 5\nI1006 21:31:01.430316    4619 log.go:172] 
(0x40009d60a0) (5) Data frame handling\nI1006 21:31:01.431060    4619 log.go:172] (0x4000aae000) (0x40009d6000) Stream removed, broadcasting: 1\nI1006 21:31:01.434050    4619 log.go:172] (0x4000aae000) Go away received\nI1006 21:31:01.438854    4619 log.go:172] (0x4000aae000) (0x40009d6000) Stream removed, broadcasting: 1\nI1006 21:31:01.439210    4619 log.go:172] (0x4000aae000) (0x4000a20000) Stream removed, broadcasting: 3\nI1006 21:31:01.439453    4619 log.go:172] (0x4000aae000) (0x40009d60a0) Stream removed, broadcasting: 5\n"
Oct  6 21:31:01.447: INFO: stdout: ""
Oct  6 21:31:01.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6232 execpodqclmm -- /bin/sh -x -c nc -zv -t -w 2 10.101.13.73 80'
Oct  6 21:31:02.962: INFO: stderr: "I1006 21:31:02.832471    4640 log.go:172] (0x40000140b0) (0x4000a7a140) Create stream\nI1006 21:31:02.835562    4640 log.go:172] (0x40000140b0) (0x4000a7a140) Stream added, broadcasting: 1\nI1006 21:31:02.847421    4640 log.go:172] (0x40000140b0) Reply frame received for 1\nI1006 21:31:02.848638    4640 log.go:172] (0x40000140b0) (0x4000a7a320) Create stream\nI1006 21:31:02.848796    4640 log.go:172] (0x40000140b0) (0x4000a7a320) Stream added, broadcasting: 3\nI1006 21:31:02.850487    4640 log.go:172] (0x40000140b0) Reply frame received for 3\nI1006 21:31:02.850949    4640 log.go:172] (0x40000140b0) (0x4000a7a3c0) Create stream\nI1006 21:31:02.851058    4640 log.go:172] (0x40000140b0) (0x4000a7a3c0) Stream added, broadcasting: 5\nI1006 21:31:02.852510    4640 log.go:172] (0x40000140b0) Reply frame received for 5\nI1006 21:31:02.943049    4640 log.go:172] (0x40000140b0) Data frame received for 3\nI1006 21:31:02.943478    4640 log.go:172] (0x40000140b0) Data frame received for 5\nI1006 21:31:02.943673    4640 log.go:172] (0x4000a7a3c0) (5) Data frame handling\nI1006 21:31:02.943917    4640 log.go:172] (0x4000a7a320) (3) Data frame handling\nI1006 21:31:02.944104    4640 log.go:172] (0x40000140b0) Data frame received for 1\nI1006 21:31:02.944211    4640 log.go:172] (0x4000a7a140) (1) Data frame handling\nI1006 21:31:02.945261    4640 log.go:172] (0x4000a7a3c0) (5) Data frame sent\nI1006 21:31:02.945719    4640 log.go:172] (0x40000140b0) Data frame received for 5\nI1006 21:31:02.945843    4640 log.go:172] (0x4000a7a3c0) (5) Data frame handling\nI1006 21:31:02.945964    4640 log.go:172] (0x4000a7a140) (1) Data frame sent\n+ nc -zv -t -w 2 10.101.13.73 80\nConnection to 10.101.13.73 80 port [tcp/http] succeeded!\nI1006 21:31:02.950021    4640 log.go:172] (0x40000140b0) (0x4000a7a140) Stream removed, broadcasting: 1\nI1006 21:31:02.950902    4640 log.go:172] (0x40000140b0) Go away received\nI1006 21:31:02.954023    4640 log.go:172] 
(0x40000140b0) (0x4000a7a140) Stream removed, broadcasting: 1\nI1006 21:31:02.954321    4640 log.go:172] (0x40000140b0) (0x4000a7a320) Stream removed, broadcasting: 3\nI1006 21:31:02.954506    4640 log.go:172] (0x40000140b0) (0x4000a7a3c0) Stream removed, broadcasting: 5\n"
Oct  6 21:31:02.964: INFO: stdout: ""
Oct  6 21:31:02.964: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:31:03.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6232" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:14.308 seconds]
[sig-network] Services
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":254,"skipped":4161,"failed":0}
SSS
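The end state the test above converges on — `externalname-service` converted to a ClusterIP Service fronting the replication controller's two pods, reachable on port 80 as the `nc` checks confirm — can be sketched as below. The name, namespace, port, and the assigned ClusterIP 10.101.13.73 appear in the log; the selector label is an assumption:

```yaml
# Sketch of the post-conversion Service. The nc probes in the log hit
# both the DNS name and the ClusterIP on port 80 and succeed.
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
  namespace: services-6232
spec:
  type: ClusterIP                 # was ExternalName before the change
  selector:
    name: externalname-service    # assumed label on the RC's pods
  ports:
  - port: 80
    targetPort: 80
```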
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:31:03.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct  6 21:31:06.981: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct  6 21:31:09.563: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616666, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616666, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616667, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616666, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 21:31:11.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616666, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616666, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616667, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616666, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct  6 21:31:14.604: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Oct  6 21:31:14.634: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:31:14.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-208" for this suite.
STEP: Destroying namespace "webhook-208-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.671 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":255,"skipped":4164,"failed":0}
SSSSSSSSSS
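The admission-webhook test above registers a webhook that intercepts CRD creation and rejects it. A hedged sketch of that kind of registration object follows; the service name `e2e-test-webhook` and namespace `webhook-208` come from the log, while the configuration name, webhook name, path, and CA bundle are placeholders:

```yaml
# Sketch of a ValidatingWebhookConfiguration matching CREATE requests
# for CustomResourceDefinitions; the backing webhook denies them, so
# the CRD create in the test fails as expected.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-crd-example            # placeholder name
webhooks:
- name: deny-crd.example.com        # placeholder name
  rules:
  - apiGroups: ["apiextensions.k8s.io"]
    apiVersions: ["v1", "v1beta1"]
    operations: ["CREATE"]
    resources: ["customresourcedefinitions"]
  clientConfig:
    service:
      namespace: webhook-208        # namespace from the log
      name: e2e-test-webhook        # service name from the log
      path: /crd                    # assumed path
    caBundle: ""                    # placeholder; base64 CA in practice
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
```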
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:31:14.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct  6 21:31:14.851: INFO: Waiting up to 5m0s for pod "pod-960abe2d-1920-4656-80d5-7dab6bc17b8e" in namespace "emptydir-3360" to be "success or failure"
Oct  6 21:31:14.867: INFO: Pod "pod-960abe2d-1920-4656-80d5-7dab6bc17b8e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.452047ms
Oct  6 21:31:16.874: INFO: Pod "pod-960abe2d-1920-4656-80d5-7dab6bc17b8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023447679s
Oct  6 21:31:18.882: INFO: Pod "pod-960abe2d-1920-4656-80d5-7dab6bc17b8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030678713s
STEP: Saw pod success
Oct  6 21:31:18.882: INFO: Pod "pod-960abe2d-1920-4656-80d5-7dab6bc17b8e" satisfied condition "success or failure"
Oct  6 21:31:18.887: INFO: Trying to get logs from node jerma-worker pod pod-960abe2d-1920-4656-80d5-7dab6bc17b8e container test-container: 
STEP: delete the pod
Oct  6 21:31:18.924: INFO: Waiting for pod pod-960abe2d-1920-4656-80d5-7dab6bc17b8e to disappear
Oct  6 21:31:18.937: INFO: Pod pod-960abe2d-1920-4656-80d5-7dab6bc17b8e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:31:18.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3360" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4174,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
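The emptyDir test above ("root,0777,tmpfs") exercises a memory-backed emptyDir with a root container writing a 0777-mode file into it. A minimal sketch of that pod shape follows; only the container name `test-container` comes from the log, and the image, pod name, mount path, and command are assumptions:

```yaml
# Sketch: tmpfs-backed emptyDir (medium: Memory). The container writes
# a file, sets mode 0777, and reports the permissions, loosely
# mirroring what the conformance test verifies.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-tmpfs-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container              # container name seen in the log
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /mnt/volume/f && chmod 0777 /mnt/volume/f && stat -c '%a' /mnt/volume/f"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume
  volumes:
  - name: vol
    emptyDir:
      medium: Memory                  # tmpfs backing
```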
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:31:18.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:31:24.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3161" for this suite.

• [SLOW TEST:5.377 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":257,"skipped":4218,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
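The adoption scenario above can be sketched as two manifests: a bare pod labeled `name=pod-adoption` exists first, then a ReplicationController whose selector matches is created and adopts the orphan instead of starting a replacement. The `pod-adoption` label value comes from the log; the image (seen elsewhere in this log) and replica count are assumptions:

```yaml
# Step 1: a bare pod with the matching label.
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1
---
# Step 2: an RC whose selector matches; it adopts the existing pod,
# so replicas: 1 is already satisfied without creating a new one.
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
```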
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:31:24.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-624c6489-558c-4bcb-941d-999c5e460a98
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:31:24.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-383" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":258,"skipped":4250,"failed":0}
SSSS
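The negative test above submits a Secret whose data map contains an empty key, which the API server's validation rejects (keys must be non-empty and match the allowed character set). A sketch of the invalid object follows; the secret name is taken from the log, the value is an assumption:

```yaml
# Invalid by design: the empty key "" fails Secret key validation, so
# the create request is rejected, which is what the test asserts.
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-test-624c6489-558c-4bcb-941d-999c5e460a98
stringData:
  "": not-allowed        # empty key -> validation error on create
```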
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:31:24.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Oct  6 21:31:24.544: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-8578 /api/v1/namespaces/watch-8578/configmaps/e2e-watch-test-resource-version aff4997e-a454-4f82-9eff-010d911d9988 3622553 0 2020-10-06 21:31:24 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Oct  6 21:31:24.545: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-8578 /api/v1/namespaces/watch-8578/configmaps/e2e-watch-test-resource-version aff4997e-a454-4f82-9eff-010d911d9988 3622554 0 2020-10-06 21:31:24 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:31:24.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8578" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":259,"skipped":4254,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:31:24.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Oct  6 21:31:24.610: INFO: PodSpec: initContainers in spec.initContainers
Oct  6 21:32:13.548: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-91dce7d7-9342-4a4f-8c6f-4e64bdd1d891", GenerateName:"", Namespace:"init-container-6026", SelfLink:"/api/v1/namespaces/init-container-6026/pods/pod-init-91dce7d7-9342-4a4f-8c6f-4e64bdd1d891", UID:"c71138dd-f432-41b4-b1ad-6929ca3a39ff", ResourceVersion:"3622756", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63737616684, loc:(*time.Location)(0x7271fa0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"609618133"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-h7dl9", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x4002b12ec0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-h7dl9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-h7dl9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-h7dl9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4005e5ed18), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4002edc480), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", 
Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x4005e5eda0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x4005e5edc0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x4005e5edc8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x4005e5edcc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616684, loc:(*time.Location)(0x7271fa0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616684, loc:(*time.Location)(0x7271fa0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616684, loc:(*time.Location)(0x7271fa0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737616684, loc:(*time.Location)(0x7271fa0)}}, Reason:"", Message:""}}, 
Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.9", PodIP:"10.244.2.69", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.69"}}, StartTime:(*v1.Time)(0x40050ef260), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x4000499260)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x40004992d0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://aae9aa2d3c74c76ac131105ae978b65636c88aa7188b955073d713be9d5141c1", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x40050ef2a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x40050ef280), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0x4005e5ee4f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:32:13.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6026" for this suite.

• [SLOW TEST:49.013 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":260,"skipped":4264,"failed":0}
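The pod exercised by this test can be reconstructed from the struct dump above: a RestartAlways pod whose first init container always fails, so the second init container and the app container must never start. The field values below come from the log; the dict-building helper itself is only an illustrative sketch, not the e2e framework's code.

```python
# Sketch of the pod under test: init1 (/bin/false) fails, so the kubelet
# retries it with backoff and never starts init2 or run1. Names, images,
# and resources are taken from the dump above; the helper is illustrative.
def init_failure_pod(name="pod-init-91dce7d7-9342-4a4f-8c6f-4e64bdd1d891"):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name, "labels": {"name": "foo"}},
        "spec": {
            "restartPolicy": "Always",  # kubelet keeps restarting init1
            "initContainers": [
                {"name": "init1", "image": "docker.io/library/busybox:1.29",
                 "command": ["/bin/false"]},  # always exits non-zero
                {"name": "init2", "image": "docker.io/library/busybox:1.29",
                 "command": ["/bin/true"]},   # blocked while init1 fails
            ],
            "containers": [
                {"name": "run1", "image": "k8s.gcr.io/pause:3.1",
                 "resources": {"limits": {"cpu": "100m"},
                               "requests": {"cpu": "100m"}}},
            ],
        },
    }
```

This matches the status seen in the dump: `init1` has `RestartCount:3` while `init2` and `run1` remain Waiting with empty ContainerIDs.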
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:32:13.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-ff9784d5-159b-49a0-9c62-ea75c788ef52
STEP: Creating a pod to test consume configMaps
Oct  6 21:32:13.690: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e487f033-99cd-44ea-93c3-ebe4e0999b64" in namespace "projected-6235" to be "success or failure"
Oct  6 21:32:13.708: INFO: Pod "pod-projected-configmaps-e487f033-99cd-44ea-93c3-ebe4e0999b64": Phase="Pending", Reason="", readiness=false. Elapsed: 17.84151ms
Oct  6 21:32:15.745: INFO: Pod "pod-projected-configmaps-e487f033-99cd-44ea-93c3-ebe4e0999b64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055604291s
Oct  6 21:32:17.818: INFO: Pod "pod-projected-configmaps-e487f033-99cd-44ea-93c3-ebe4e0999b64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.128416004s
STEP: Saw pod success
Oct  6 21:32:17.819: INFO: Pod "pod-projected-configmaps-e487f033-99cd-44ea-93c3-ebe4e0999b64" satisfied condition "success or failure"
Oct  6 21:32:17.843: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-e487f033-99cd-44ea-93c3-ebe4e0999b64 container projected-configmap-volume-test: 
STEP: delete the pod
Oct  6 21:32:18.077: INFO: Waiting for pod pod-projected-configmaps-e487f033-99cd-44ea-93c3-ebe4e0999b64 to disappear
Oct  6 21:32:18.087: INFO: Pod pod-projected-configmaps-e487f033-99cd-44ea-93c3-ebe4e0999b64 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:32:18.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6235" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4294,"failed":0}
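The "mappings and Item mode" case above projects a ConfigMap key to a new path with an explicit per-item file mode. The log only prints the generated resource names, so the key, path, and 0o400 mode below are assumptions modeled on the conformance test's shape, not values read from this run.

```python
# Hedged sketch of a pod consuming a projected ConfigMap volume with an
# items mapping and an explicit item mode. Key/path/mode are assumed for
# illustration; only the resource-name pattern appears in the log.
def projected_configmap_pod(cm_name, pod_name):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "projected-configmap-volume-test",
                "image": "docker.io/library/busybox:1.29",
                # reads the remapped file, not the original key name
                "command": ["cat", "/etc/projected-configmap-volume/path/to/data-2"],
                "volumeMounts": [{"name": "projected-configmap-volume",
                                  "mountPath": "/etc/projected-configmap-volume"}],
            }],
            "volumes": [{
                "name": "projected-configmap-volume",
                "projected": {"sources": [{
                    "configMap": {
                        "name": cm_name,
                        "items": [{"key": "data-2",
                                   "path": "path/to/data-2",
                                   "mode": 0o400}],  # per-item file mode
                    },
                }]},
            }],
        },
    }
```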
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:32:18.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 21:32:18.172: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/: 
alternatives.log
containers/

>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 21:32:18.403: INFO: Create a RollingUpdate DaemonSet
Oct  6 21:32:18.408: INFO: Check that daemon pods launch on every node of the cluster
Oct  6 21:32:18.439: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:32:18.459: INFO: Number of nodes with available pods: 0
Oct  6 21:32:18.459: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 21:32:19.552: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:32:19.559: INFO: Number of nodes with available pods: 0
Oct  6 21:32:19.559: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 21:32:20.470: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:32:20.476: INFO: Number of nodes with available pods: 0
Oct  6 21:32:20.476: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 21:32:21.472: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:32:21.527: INFO: Number of nodes with available pods: 0
Oct  6 21:32:21.527: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 21:32:22.471: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:32:22.476: INFO: Number of nodes with available pods: 0
Oct  6 21:32:22.476: INFO: Node jerma-worker is running more than one daemon pod
Oct  6 21:32:23.517: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:32:23.522: INFO: Number of nodes with available pods: 2
Oct  6 21:32:23.522: INFO: Number of running nodes: 2, number of available pods: 2
Oct  6 21:32:23.522: INFO: Update the DaemonSet to trigger a rollout
Oct  6 21:32:23.532: INFO: Updating DaemonSet daemon-set
Oct  6 21:32:34.592: INFO: Roll back the DaemonSet before rollout is complete
Oct  6 21:32:34.601: INFO: Updating DaemonSet daemon-set
Oct  6 21:32:34.601: INFO: Make sure DaemonSet rollback is complete
Oct  6 21:32:34.681: INFO: Wrong image for pod: daemon-set-h26rl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Oct  6 21:32:34.681: INFO: Pod daemon-set-h26rl is not available
Oct  6 21:32:34.712: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:32:35.721: INFO: Wrong image for pod: daemon-set-h26rl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Oct  6 21:32:35.721: INFO: Pod daemon-set-h26rl is not available
Oct  6 21:32:35.728: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct  6 21:32:36.721: INFO: Pod daemon-set-gnd8q is not available
Oct  6 21:32:36.730: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4314, will wait for the garbage collector to delete the pods
Oct  6 21:32:36.804: INFO: Deleting DaemonSet.extensions daemon-set took: 6.929249ms
Oct  6 21:32:37.105: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.877623ms
Oct  6 21:32:44.309: INFO: Number of nodes with available pods: 0
Oct  6 21:32:44.309: INFO: Number of running nodes: 0, number of available pods: 0
Oct  6 21:32:44.312: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4314/daemonsets","resourceVersion":"3622965"},"items":null}

Oct  6 21:32:44.316: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4314/pods","resourceVersion":"3622965"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:32:44.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4314" for this suite.

• [SLOW TEST:26.054 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":263,"skipped":4329,"failed":0}
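The "rollback without unnecessary restarts" assertion above reduces to a simple invariant: after rolling back from the bad image (`foo:non-existent`) to the good one (`docker.io/library/httpd:2.4.38-alpine`), only pods that actually picked up the bad image get replaced; pods still running the good image are left untouched. A toy version of that check, with image values from the log:

```python
# Minimal sketch of the rollback invariant checked above: replace only
# the pods whose image diverges from the rolled-back (good) spec.
# The helper is illustrative; the e2e framework's real check also waits
# on availability and per-node scheduling.
GOOD_IMAGE = "docker.io/library/httpd:2.4.38-alpine"
BAD_IMAGE = "foo:non-existent"

def pods_needing_replacement(pod_images):
    """pod_images: {pod name: running image}. Returns pods to replace."""
    return [name for name, image in pod_images.items()
            if image != GOOD_IMAGE]
```

In the run above only `daemon-set-h26rl` had pulled the bad image, so it alone is replaced (by `daemon-set-gnd8q`) while the other node's pod keeps running.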
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:32:44.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Oct  6 21:32:44.464: INFO: Waiting up to 5m0s for pod "client-containers-d5b7d083-011a-4338-8892-86fba66fc8bf" in namespace "containers-7583" to be "success or failure"
Oct  6 21:32:44.544: INFO: Pod "client-containers-d5b7d083-011a-4338-8892-86fba66fc8bf": Phase="Pending", Reason="", readiness=false. Elapsed: 79.838448ms
Oct  6 21:32:46.549: INFO: Pod "client-containers-d5b7d083-011a-4338-8892-86fba66fc8bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085378754s
Oct  6 21:32:48.561: INFO: Pod "client-containers-d5b7d083-011a-4338-8892-86fba66fc8bf": Phase="Running", Reason="", readiness=true. Elapsed: 4.096826951s
Oct  6 21:32:50.567: INFO: Pod "client-containers-d5b7d083-011a-4338-8892-86fba66fc8bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.103523772s
STEP: Saw pod success
Oct  6 21:32:50.568: INFO: Pod "client-containers-d5b7d083-011a-4338-8892-86fba66fc8bf" satisfied condition "success or failure"
Oct  6 21:32:50.572: INFO: Trying to get logs from node jerma-worker pod client-containers-d5b7d083-011a-4338-8892-86fba66fc8bf container test-container: 
STEP: delete the pod
Oct  6 21:32:50.595: INFO: Waiting for pod client-containers-d5b7d083-011a-4338-8892-86fba66fc8bf to disappear
Oct  6 21:32:50.611: INFO: Pod client-containers-d5b7d083-011a-4338-8892-86fba66fc8bf no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:32:50.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7583" for this suite.

• [SLOW TEST:6.278 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4330,"failed":0}
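The "override the image's default arguments" case relies on the split between the two container fields: `args` replaces only the image's Docker CMD, while `command` (unset here) would replace ENTRYPOINT. The log does not print the manifest, so the image and argument values below are assumptions for the sketch:

```python
# Hedged sketch of an args-override pod: leaving `command` unset keeps
# the image ENTRYPOINT, while `args` replaces the Docker CMD. Image and
# argument values are assumed; they do not appear in this log.
def override_args_pod(name):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "test-container",
                "image": "docker.io/library/busybox:1.29",
                # no "command" key: ENTRYPOINT is preserved
                "args": ["echo", "override", "arguments"],  # replaces CMD
            }],
        },
    }
```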
SSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:32:50.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 21:32:50.752: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Oct  6 21:32:55.767: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Oct  6 21:32:55.768: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Oct  6 21:32:59.833: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-9273 /apis/apps/v1/namespaces/deployment-9273/deployments/test-cleanup-deployment 0a8f1653-0ca6-4596-8b74-8eb817c89dbf 3623106 1 2020-10-06 21:32:55 +0000 UTC   map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4003119c78  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-10-06 21:32:55 +0000 UTC,LastTransitionTime:2020-10-06 21:32:55 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-10-06 21:32:58 +0000 UTC,LastTransitionTime:2020-10-06 21:32:55 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Oct  6 21:32:59.839: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-9273 /apis/apps/v1/namespaces/deployment-9273/replicasets/test-cleanup-deployment-55ffc6b7b6 b61c3a7d-96e2-48ca-9ad5-f1d91181805b 3623096 1 2020-10-06 21:32:55 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 0a8f1653-0ca6-4596-8b74-8eb817c89dbf 0x400567c057 0x400567c058}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x400567c0c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Oct  6 21:32:59.846: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-88zn9" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-88zn9 test-cleanup-deployment-55ffc6b7b6- deployment-9273 /api/v1/namespaces/deployment-9273/pods/test-cleanup-deployment-55ffc6b7b6-88zn9 8015d581-f38d-4c31-9bf0-59a10c02a426 3623095 0 2020-10-06 21:32:55 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 b61c3a7d-96e2-48ca-9ad5-f1d91181805b 0x400567c447 0x400567c448}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sbfqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sbfqj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sbfqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGr
acePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 21:32:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 21:32:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 21:32:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 21:32:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.171,StartTime:2020-10-06 21:32:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-06 21:32:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://39ed5d4cc1d1d0cfb9d9131ac0b695a7bf09d6a96027b7e8257e27d0eb9b79b2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.171,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:32:59.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9273" for this suite.

• [SLOW TEST:9.234 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":265,"skipped":4337,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:32:59.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Oct  6 21:32:59.965: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix568065682/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:33:00.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6237" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":266,"skipped":4343,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:33:01.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 21:33:01.059: INFO: Creating ReplicaSet my-hostname-basic-978a035d-1396-4ea4-a961-4cc62b59226b
Oct  6 21:33:01.070: INFO: Pod name my-hostname-basic-978a035d-1396-4ea4-a961-4cc62b59226b: Found 0 pods out of 1
Oct  6 21:33:06.083: INFO: Pod name my-hostname-basic-978a035d-1396-4ea4-a961-4cc62b59226b: Found 1 pods out of 1
Oct  6 21:33:06.084: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-978a035d-1396-4ea4-a961-4cc62b59226b" is running
Oct  6 21:33:06.089: INFO: Pod "my-hostname-basic-978a035d-1396-4ea4-a961-4cc62b59226b-zggng" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-06 21:33:01 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-06 21:33:04 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-06 21:33:04 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-06 21:33:01 +0000 UTC Reason: Message:}])
Oct  6 21:33:06.090: INFO: Trying to dial the pod
Oct  6 21:33:11.107: INFO: Controller my-hostname-basic-978a035d-1396-4ea4-a961-4cc62b59226b: Got expected result from replica 1 [my-hostname-basic-978a035d-1396-4ea4-a961-4cc62b59226b-zggng]: "my-hostname-basic-978a035d-1396-4ea4-a961-4cc62b59226b-zggng", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:33:11.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9200" for this suite.

• [SLOW TEST:10.121 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":267,"skipped":4365,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:33:11.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl label
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1276
STEP: creating the pod
Oct  6 21:33:11.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6132'
Oct  6 21:33:12.834: INFO: stderr: ""
Oct  6 21:33:12.835: INFO: stdout: "pod/pause created\n"
Oct  6 21:33:12.835: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Oct  6 21:33:12.835: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6132" to be "running and ready"
Oct  6 21:33:12.847: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 11.594502ms
Oct  6 21:33:14.853: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01794324s
Oct  6 21:33:16.859: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.023610849s
Oct  6 21:33:16.859: INFO: Pod "pause" satisfied condition "running and ready"
Oct  6 21:33:16.859: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
Oct  6 21:33:16.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6132'
Oct  6 21:33:18.144: INFO: stderr: ""
Oct  6 21:33:18.144: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Oct  6 21:33:18.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6132'
Oct  6 21:33:19.402: INFO: stderr: ""
Oct  6 21:33:19.403: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          7s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Oct  6 21:33:19.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6132'
Oct  6 21:33:20.660: INFO: stderr: ""
Oct  6 21:33:20.660: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Oct  6 21:33:20.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6132'
Oct  6 21:33:21.923: INFO: stderr: ""
Oct  6 21:33:21.923: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] Kubectl label
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1283
STEP: using delete to clean up resources
Oct  6 21:33:21.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6132'
Oct  6 21:33:23.200: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct  6 21:33:23.201: INFO: stdout: "pod \"pause\" force deleted\n"
Oct  6 21:33:23.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6132'
Oct  6 21:33:24.470: INFO: stderr: "No resources found in kubectl-6132 namespace.\n"
Oct  6 21:33:24.471: INFO: stdout: ""
Oct  6 21:33:24.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6132 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Oct  6 21:33:25.735: INFO: stderr: ""
Oct  6 21:33:25.735: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:33:25.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6132" for this suite.

• [SLOW TEST:14.625 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1273
    should update the label on a resource  [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":268,"skipped":4367,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:33:25.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-b0715fb4-875f-4c27-8ba5-35cee3d0eb55 in namespace container-probe-8360
Oct  6 21:33:29.885: INFO: Started pod busybox-b0715fb4-875f-4c27-8ba5-35cee3d0eb55 in namespace container-probe-8360
STEP: checking the pod's current state and verifying that restartCount is present
Oct  6 21:33:29.890: INFO: Initial restart count of pod busybox-b0715fb4-875f-4c27-8ba5-35cee3d0eb55 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:37:30.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8360" for this suite.

• [SLOW TEST:244.790 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4412,"failed":0}
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:37:30.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Oct  6 21:37:40.151: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:37:40.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8750" for this suite.

• [SLOW TEST:9.728 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4418,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:37:40.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-4581
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-4581
I1006 21:37:40.874119       7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-4581, replica count: 2
I1006 21:37:43.925580       7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1006 21:37:46.926047       7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct  6 21:37:46.926: INFO: Creating new exec pod
Oct  6 21:37:53.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4581 execpodjg975 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Oct  6 21:37:55.506: INFO: stderr: "I1006 21:37:55.385052    4860 log.go:172] (0x400011a2c0) (0x40009b2000) Create stream\nI1006 21:37:55.389385    4860 log.go:172] (0x400011a2c0) (0x40009b2000) Stream added, broadcasting: 1\nI1006 21:37:55.404614    4860 log.go:172] (0x400011a2c0) Reply frame received for 1\nI1006 21:37:55.405862    4860 log.go:172] (0x400011a2c0) (0x40009b20a0) Create stream\nI1006 21:37:55.405971    4860 log.go:172] (0x400011a2c0) (0x40009b20a0) Stream added, broadcasting: 3\nI1006 21:37:55.407577    4860 log.go:172] (0x400011a2c0) Reply frame received for 3\nI1006 21:37:55.407856    4860 log.go:172] (0x400011a2c0) (0x4000825a40) Create stream\nI1006 21:37:55.407912    4860 log.go:172] (0x400011a2c0) (0x4000825a40) Stream added, broadcasting: 5\nI1006 21:37:55.409171    4860 log.go:172] (0x400011a2c0) Reply frame received for 5\nI1006 21:37:55.483711    4860 log.go:172] (0x400011a2c0) Data frame received for 5\nI1006 21:37:55.484057    4860 log.go:172] (0x4000825a40) (5) Data frame handling\nI1006 21:37:55.485093    4860 log.go:172] (0x4000825a40) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI1006 21:37:55.492124    4860 log.go:172] (0x400011a2c0) Data frame received for 5\nI1006 21:37:55.492379    4860 log.go:172] (0x4000825a40) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI1006 21:37:55.492496    4860 log.go:172] (0x400011a2c0) Data frame received for 3\nI1006 21:37:55.492563    4860 log.go:172] (0x40009b20a0) (3) Data frame handling\nI1006 21:37:55.492631    4860 log.go:172] (0x4000825a40) (5) Data frame sent\nI1006 21:37:55.492695    4860 log.go:172] (0x400011a2c0) Data frame received for 5\nI1006 21:37:55.492736    4860 log.go:172] (0x4000825a40) (5) Data frame handling\nI1006 21:37:55.493413    4860 log.go:172] (0x400011a2c0) Data frame received for 1\nI1006 21:37:55.493470    4860 log.go:172] (0x40009b2000) (1) Data frame handling\nI1006 21:37:55.493518    4860 log.go:172] (0x40009b2000) (1) Data frame sent\nI1006 21:37:55.494966    4860 log.go:172] (0x400011a2c0) (0x40009b2000) Stream removed, broadcasting: 1\nI1006 21:37:55.497188    4860 log.go:172] (0x400011a2c0) Go away received\nI1006 21:37:55.499383    4860 log.go:172] (0x400011a2c0) (0x40009b2000) Stream removed, broadcasting: 1\nI1006 21:37:55.499755    4860 log.go:172] (0x400011a2c0) (0x40009b20a0) Stream removed, broadcasting: 3\nI1006 21:37:55.500422    4860 log.go:172] (0x400011a2c0) (0x4000825a40) Stream removed, broadcasting: 5\n"
Oct  6 21:37:55.507: INFO: stdout: ""
Oct  6 21:37:55.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4581 execpodjg975 -- /bin/sh -x -c nc -zv -t -w 2 10.100.191.235 80'
Oct  6 21:37:57.004: INFO: stderr: "I1006 21:37:56.870493    4882 log.go:172] (0x40001042c0) (0x400080dcc0) Create stream\nI1006 21:37:56.873232    4882 log.go:172] (0x40001042c0) (0x400080dcc0) Stream added, broadcasting: 1\nI1006 21:37:56.886171    4882 log.go:172] (0x40001042c0) Reply frame received for 1\nI1006 21:37:56.887130    4882 log.go:172] (0x40001042c0) (0x400080dd60) Create stream\nI1006 21:37:56.887222    4882 log.go:172] (0x40001042c0) (0x400080dd60) Stream added, broadcasting: 3\nI1006 21:37:56.888574    4882 log.go:172] (0x40001042c0) Reply frame received for 3\nI1006 21:37:56.888969    4882 log.go:172] (0x40001042c0) (0x4000772000) Create stream\nI1006 21:37:56.889046    4882 log.go:172] (0x40001042c0) (0x4000772000) Stream added, broadcasting: 5\nI1006 21:37:56.890135    4882 log.go:172] (0x40001042c0) Reply frame received for 5\nI1006 21:37:56.984570    4882 log.go:172] (0x40001042c0) Data frame received for 1\nI1006 21:37:56.984829    4882 log.go:172] (0x40001042c0) Data frame received for 3\nI1006 21:37:56.985126    4882 log.go:172] (0x40001042c0) Data frame received for 5\nI1006 21:37:56.985218    4882 log.go:172] (0x400080dcc0) (1) Data frame handling\nI1006 21:37:56.985380    4882 log.go:172] (0x400080dd60) (3) Data frame handling\nI1006 21:37:56.985478    4882 log.go:172] (0x4000772000) (5) Data frame handling\nI1006 21:37:56.985950    4882 log.go:172] (0x400080dcc0) (1) Data frame sent\nI1006 21:37:56.986857    4882 log.go:172] (0x4000772000) (5) Data frame sent\nI1006 21:37:56.986971    4882 log.go:172] (0x40001042c0) Data frame received for 5\n+ nc -zv -t -w 2 10.100.191.235 80\nConnection to 10.100.191.235 80 port [tcp/http] succeeded!\nI1006 21:37:56.988172    4882 log.go:172] (0x40001042c0) (0x400080dcc0) Stream removed, broadcasting: 1\nI1006 21:37:56.990806    4882 log.go:172] (0x4000772000) (5) Data frame handling\nI1006 21:37:56.992426    4882 log.go:172] (0x40001042c0) Go away received\nI1006 21:37:56.996521    4882 log.go:172] (0x40001042c0) (0x400080dcc0) Stream removed, broadcasting: 1\nI1006 21:37:56.997006    4882 log.go:172] (0x40001042c0) (0x400080dd60) Stream removed, broadcasting: 3\nI1006 21:37:56.997265    4882 log.go:172] (0x40001042c0) (0x4000772000) Stream removed, broadcasting: 5\n"
Oct  6 21:37:57.005: INFO: stdout: ""
Oct  6 21:37:57.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4581 execpodjg975 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.9 32702'
Oct  6 21:37:58.425: INFO: stderr: "I1006 21:37:58.313160    4905 log.go:172] (0x4000b24a50) (0x4000746000) Create stream\nI1006 21:37:58.316059    4905 log.go:172] (0x4000b24a50) (0x4000746000) Stream added, broadcasting: 1\nI1006 21:37:58.327188    4905 log.go:172] (0x4000b24a50) Reply frame received for 1\nI1006 21:37:58.328017    4905 log.go:172] (0x4000b24a50) (0x40007460a0) Create stream\nI1006 21:37:58.328097    4905 log.go:172] (0x4000b24a50) (0x40007460a0) Stream added, broadcasting: 3\nI1006 21:37:58.329877    4905 log.go:172] (0x4000b24a50) Reply frame received for 3\nI1006 21:37:58.330204    4905 log.go:172] (0x4000b24a50) (0x4000815e00) Create stream\nI1006 21:37:58.330269    4905 log.go:172] (0x4000b24a50) (0x4000815e00) Stream added, broadcasting: 5\nI1006 21:37:58.331438    4905 log.go:172] (0x4000b24a50) Reply frame received for 5\nI1006 21:37:58.408497    4905 log.go:172] (0x4000b24a50) Data frame received for 5\nI1006 21:37:58.408671    4905 log.go:172] (0x4000815e00) (5) Data frame handling\nI1006 21:37:58.408817    4905 log.go:172] (0x4000b24a50) Data frame received for 3\nI1006 21:37:58.409037    4905 log.go:172] (0x40007460a0) (3) Data frame handling\nI1006 21:37:58.409094    4905 log.go:172] (0x4000815e00) (5) Data frame sent\nI1006 21:37:58.409578    4905 log.go:172] (0x4000b24a50) Data frame received for 5\nI1006 21:37:58.409635    4905 log.go:172] (0x4000815e00) (5) Data frame handling\nI1006 21:37:58.410147    4905 log.go:172] (0x4000b24a50) Data frame received for 1\nI1006 21:37:58.410268    4905 log.go:172] (0x4000746000) (1) Data frame handling\nI1006 21:37:58.410354    4905 log.go:172] (0x4000746000) (1) Data frame sent\n+ nc -zv -t -w 2 172.18.0.9 32702\nConnection to 172.18.0.9 32702 port [tcp/32702] succeeded!\nI1006 21:37:58.411075    4905 log.go:172] (0x4000815e00) (5) Data frame sent\nI1006 21:37:58.411150    4905 log.go:172] (0x4000b24a50) Data frame received for 5\nI1006 21:37:58.411217    4905 log.go:172] (0x4000815e00) (5) Data frame handling\nI1006 21:37:58.412243    4905 log.go:172] (0x4000b24a50) (0x4000746000) Stream removed, broadcasting: 1\nI1006 21:37:58.415863    4905 log.go:172] (0x4000b24a50) Go away received\nI1006 21:37:58.417761    4905 log.go:172] (0x4000b24a50) (0x4000746000) Stream removed, broadcasting: 1\nI1006 21:37:58.418063    4905 log.go:172] (0x4000b24a50) (0x40007460a0) Stream removed, broadcasting: 3\nI1006 21:37:58.418681    4905 log.go:172] (0x4000b24a50) (0x4000815e00) Stream removed, broadcasting: 5\n"
Oct  6 21:37:58.426: INFO: stdout: ""
Oct  6 21:37:58.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4581 execpodjg975 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.10 32702'
Oct  6 21:37:59.873: INFO: stderr: "I1006 21:37:59.766751    4929 log.go:172] (0x4000a680b0) (0x40007e9f40) Create stream\nI1006 21:37:59.771061    4929 log.go:172] (0x4000a680b0) (0x40007e9f40) Stream added, broadcasting: 1\nI1006 21:37:59.784805    4929 log.go:172] (0x4000a680b0) Reply frame received for 1\nI1006 21:37:59.785551    4929 log.go:172] (0x4000a680b0) (0x40007c0000) Create stream\nI1006 21:37:59.785608    4929 log.go:172] (0x4000a680b0) (0x40007c0000) Stream added, broadcasting: 3\nI1006 21:37:59.787507    4929 log.go:172] (0x4000a680b0) Reply frame received for 3\nI1006 21:37:59.787741    4929 log.go:172] (0x4000a680b0) (0x40007c6000) Create stream\nI1006 21:37:59.787785    4929 log.go:172] (0x4000a680b0) (0x40007c6000) Stream added, broadcasting: 5\nI1006 21:37:59.788974    4929 log.go:172] (0x4000a680b0) Reply frame received for 5\nI1006 21:37:59.854449    4929 log.go:172] (0x4000a680b0) Data frame received for 5\nI1006 21:37:59.854588    4929 log.go:172] (0x4000a680b0) Data frame received for 3\nI1006 21:37:59.854723    4929 log.go:172] (0x4000a680b0) Data frame received for 1\nI1006 21:37:59.854840    4929 log.go:172] (0x40007c0000) (3) Data frame handling\nI1006 21:37:59.855077    4929 log.go:172] (0x40007e9f40) (1) Data frame handling\nI1006 21:37:59.855315    4929 log.go:172] (0x40007c6000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.10 32702\nConnection to 172.18.0.10 32702 port [tcp/32702] succeeded!\nI1006 21:37:59.858366    4929 log.go:172] (0x40007e9f40) (1) Data frame sent\nI1006 21:37:59.858570    4929 log.go:172] (0x40007c6000) (5) Data frame sent\nI1006 21:37:59.859216    4929 log.go:172] (0x4000a680b0) Data frame received for 5\nI1006 21:37:59.860213    4929 log.go:172] (0x4000a680b0) (0x40007e9f40) Stream removed, broadcasting: 1\nI1006 21:37:59.861387    4929 log.go:172] (0x40007c6000) (5) Data frame handling\nI1006 21:37:59.862358    4929 log.go:172] (0x4000a680b0) Go away received\nI1006 21:37:59.865699    4929 log.go:172] (0x4000a680b0) (0x40007e9f40) Stream removed, broadcasting: 1\nI1006 21:37:59.866037    4929 log.go:172] (0x4000a680b0) (0x40007c0000) Stream removed, broadcasting: 3\nI1006 21:37:59.866269    4929 log.go:172] (0x4000a680b0) (0x40007c6000) Stream removed, broadcasting: 5\n"
Oct  6 21:37:59.874: INFO: stdout: ""
Oct  6 21:37:59.874: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:37:59.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4581" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:19.674 seconds]
[sig-network] Services
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":271,"skipped":4427,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:37:59.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Oct  6 21:38:00.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:39:36.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3879" for this suite.

• [SLOW TEST:96.464 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":272,"skipped":4444,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:39:36.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-e2d671ea-cc6e-4676-bf2d-02512c11e016
STEP: Creating a pod to test consume configMaps
Oct  6 21:39:37.322: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4c43f14b-60ed-4f50-b70b-0bdcf68c43b1" in namespace "projected-6181" to be "success or failure"
Oct  6 21:39:38.166: INFO: Pod "pod-projected-configmaps-4c43f14b-60ed-4f50-b70b-0bdcf68c43b1": Phase="Pending", Reason="", readiness=false. Elapsed: 844.096784ms
Oct  6 21:39:40.172: INFO: Pod "pod-projected-configmaps-4c43f14b-60ed-4f50-b70b-0bdcf68c43b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.850253669s
Oct  6 21:39:42.251: INFO: Pod "pod-projected-configmaps-4c43f14b-60ed-4f50-b70b-0bdcf68c43b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.929196937s
Oct  6 21:39:44.651: INFO: Pod "pod-projected-configmaps-4c43f14b-60ed-4f50-b70b-0bdcf68c43b1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.32905583s
Oct  6 21:39:46.656: INFO: Pod "pod-projected-configmaps-4c43f14b-60ed-4f50-b70b-0bdcf68c43b1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.333524634s
Oct  6 21:39:48.830: INFO: Pod "pod-projected-configmaps-4c43f14b-60ed-4f50-b70b-0bdcf68c43b1": Phase="Pending", Reason="", readiness=false. Elapsed: 11.508376695s
Oct  6 21:39:50.835: INFO: Pod "pod-projected-configmaps-4c43f14b-60ed-4f50-b70b-0bdcf68c43b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.513414792s
STEP: Saw pod success
Oct  6 21:39:50.836: INFO: Pod "pod-projected-configmaps-4c43f14b-60ed-4f50-b70b-0bdcf68c43b1" satisfied condition "success or failure"
Oct  6 21:39:50.872: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-4c43f14b-60ed-4f50-b70b-0bdcf68c43b1 container projected-configmap-volume-test: 
STEP: delete the pod
Oct  6 21:39:50.916: INFO: Waiting for pod pod-projected-configmaps-4c43f14b-60ed-4f50-b70b-0bdcf68c43b1 to disappear
Oct  6 21:39:50.944: INFO: Pod pod-projected-configmaps-4c43f14b-60ed-4f50-b70b-0bdcf68c43b1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:39:50.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6181" for this suite.

• [SLOW TEST:14.542 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4499,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:39:50.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Oct  6 21:39:55.797: INFO: Successfully updated pod "annotationupdate2ccfc7d4-8dd9-48da-8c86-917c7c5ca650"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:39:59.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4218" for this suite.

• [SLOW TEST:8.872 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4515,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:39:59.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Oct  6 21:40:00.067: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-6846 /api/v1/namespaces/watch-6846/configmaps/e2e-watch-test-label-changed b51ae2b6-0993-4a37-b3ab-5c3726e68a5b 3624573 0 2020-10-06 21:39:59 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Oct  6 21:40:00.067: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-6846 /api/v1/namespaces/watch-6846/configmaps/e2e-watch-test-label-changed b51ae2b6-0993-4a37-b3ab-5c3726e68a5b 3624574 0 2020-10-06 21:39:59 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Oct  6 21:40:00.068: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-6846 /api/v1/namespaces/watch-6846/configmaps/e2e-watch-test-label-changed b51ae2b6-0993-4a37-b3ab-5c3726e68a5b 3624575 0 2020-10-06 21:39:59 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Oct  6 21:40:11.935: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-6846 /api/v1/namespaces/watch-6846/configmaps/e2e-watch-test-label-changed b51ae2b6-0993-4a37-b3ab-5c3726e68a5b 3624608 0 2020-10-06 21:39:59 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Oct  6 21:40:11.936: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-6846 /api/v1/namespaces/watch-6846/configmaps/e2e-watch-test-label-changed b51ae2b6-0993-4a37-b3ab-5c3726e68a5b 3624612 0 2020-10-06 21:39:59 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Oct  6 21:40:11.937: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-6846 /api/v1/namespaces/watch-6846/configmaps/e2e-watch-test-label-changed b51ae2b6-0993-4a37-b3ab-5c3726e68a5b 3624613 0 2020-10-06 21:39:59 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:40:11.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6846" for this suite.

• [SLOW TEST:14.060 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":275,"skipped":4525,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:40:13.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check is all data is printed  [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 21:40:17.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Oct  6 21:40:21.249: INFO: stderr: ""
Oct  6 21:40:21.249: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.12\", GitCommit:\"5ec472285121eb6c451e515bc0a7201413872fa3\", GitTreeState:\"clean\", BuildDate:\"2020-09-16T13:39:51Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/arm64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.5\", GitCommit:\"e0fccafd69541e3750d460ba0f9743b90336f24f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:11:15Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:40:21.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2579" for this suite.

• [SLOW TEST:7.925 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl version
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1467
    should check is all data is printed  [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":276,"skipped":4535,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:40:21.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct  6 21:40:22.105: INFO: Creating deployment "test-recreate-deployment"
Oct  6 21:40:22.281: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Oct  6 21:40:23.240: INFO: Waiting deployment "test-recreate-deployment" to complete
Oct  6 21:40:24.589: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737617222, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737617222, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737617223, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737617222, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 21:40:26.593: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737617222, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737617222, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737617223, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737617222, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 21:40:28.595: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737617222, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737617222, loc:(*time.Location)(0x7271fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737617223, loc:(*time.Location)(0x7271fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737617222, loc:(*time.Location)(0x7271fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct  6 21:40:30.630: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Oct  6 21:40:30.643: INFO: Updating deployment test-recreate-deployment
Oct  6 21:40:30.643: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Oct  6 21:40:31.334: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-6784 /apis/apps/v1/namespaces/deployment-6784/deployments/test-recreate-deployment 25f41638-674d-4f85-8a38-6eb075ce39fe 3624713 2 2020-10-06 21:40:22 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4005c6a6c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-10-06 21:40:30 +0000 UTC,LastTransitionTime:2020-10-06 21:40:30 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-10-06 21:40:30 +0000 UTC,LastTransitionTime:2020-10-06 21:40:22 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Oct  6 21:40:31.373: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-6784 /apis/apps/v1/namespaces/deployment-6784/replicasets/test-recreate-deployment-5f94c574ff a2b57615-e7f4-49e5-a267-a8f61ee3e367 3624712 1 2020-10-06 21:40:30 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 25f41638-674d-4f85-8a38-6eb075ce39fe 0x4005c6aa47 0x4005c6aa48}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4005c6aaa8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Oct  6 21:40:31.373: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Oct  6 21:40:31.373: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-6784 /apis/apps/v1/namespaces/deployment-6784/replicasets/test-recreate-deployment-799c574856 96d6f216-0b9a-4bfc-bfce-8be0c699d0b6 3624702 2 2020-10-06 21:40:22 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 25f41638-674d-4f85-8a38-6eb075ce39fe 0x4005c6ab17 0x4005c6ab18}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0x4005c6ab88  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Oct  6 21:40:31.481: INFO: Pod "test-recreate-deployment-5f94c574ff-r9q8j" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-r9q8j test-recreate-deployment-5f94c574ff- deployment-6784 /api/v1/namespaces/deployment-6784/pods/test-recreate-deployment-5f94c574ff-r9q8j 7ef64bcb-5df0-42b9-8af5-a9f382ee6037 3624714 0 2020-10-06 21:40:30 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff a2b57615-e7f4-49e5-a267-a8f61ee3e367 0x4002fa34b7 0x4002fa34b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5zt6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5zt6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5zt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 21:40:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 21:40:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 21:40:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-06 21:40:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-10-06 21:40:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:40:31.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6784" for this suite.

• [SLOW TEST:12.094 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":277,"skipped":4543,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Oct  6 21:40:33.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Oct  6 21:40:37.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5757" for this suite.

• [SLOW TEST:5.789 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /workspace/anago-v1.17.12-rc.0.60+02c8616ca83844/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4563,"failed":0}
SSSS
Oct  6 21:40:39.709: INFO: Running AfterSuite actions on all nodes
Oct  6 21:40:39.710: INFO: Running AfterSuite actions on node 1
Oct  6 21:40:39.710: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4567,"failed":0}

Ran 278 of 4845 Specs in 5828.454 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4567 Skipped
PASS